Category Archives: Code Reviews

Exploring Peer Review in the Computer Science Classroom: Part 2 (Exciting Conclusion)

Now, where was I?

Oh yeah, I was reviewing this paper, and getting right to the good part – the experiment method, data, and results.

I hate to disappoint you, but this section starts as a bit of a downer:

…Unfortunately, the data collected was spotty, to say the least, and was not linked together well enough to support a reasonable, detailed analysis to meet our goals.

Yikes.  Oh well, let’s see what happened…

One issue we discovered was that our design was too broad to give definitive results.  We did, however, have enough data to allow us to narrow down the focus for a second study.  From the analysis of one class, we were able to identify two interesting areas for further research.  The type of review appears [sic] have a significant effect on the length and focus of the review.  We also found evidence that students reviewed some of the concepts differently than they did others.  Both of these findings should be explored in the future work.

Good.  At least there’s some groundwork for somebody to do a future study.

Reading on, I’m impressed with the scope of their experiment.  For example, instead of experimenting on a single classroom, they ran the experiment across 8 classrooms, each with anywhere from 10 to 60 participants.  Ambitious.  Nice.

While their experiment may have been too broad to get the results they were looking for, it certainly has more authority than the studies that just used a single, small class.

However, maybe I spoke too soon:

The data collected for this study did not occur as smoothly as we would have wished. … As a result, we were able to collect a large amount of data but it is not as complete as we would have liked.

Hm.  Doesn’t sound that great.

Apparently, data was supposed to be collected from surveys, review rubrics, and questionnaires, but it looks like the questionnaire data kind of fell through:

The number of responses to the questionnaires was low.  Three classes had no post-questionnaire responses at all.  Of those classes that did have responses for the second questionnaire, the number was too small to lend any confidence to a statistical analysis…

Review rubric data was a little more interesting – 996 completed rubrics were collected from 299 reviewers from the 8 classes over the course of the study.  Nice.  But, again, it wasn’t all flowers and hugs:

While the amount of data was large, it is also incomplete.  Of those classes which were intended to have both training and a peer reviews [sic], three of them were not able to complete the second review assignment and, so, have nothing to be compared to.  Two of the other classes have only a moderate number of participants (15-20) which is not as high as we would have liked for our statistical analysis.  One class provided no viable information at all…

Things just seem to get worse and worse for these guys.

And, not to kick them while they’re down, but their writing seems to get worse and worse too.  I’m noticing more typos and tense errors as I go along.  Maybe the stress was getting to them…

So, what did they find?  Drum roll, please…

Final Results

Surprise!  It’s inconclusive!

It sounds like they found more questions than answers…

Is it type or order (or both) that is causing the effects the [sic] training and peer review?

It took me a little while to figure out what they were asking here.  Apparently, before engaging in peer review exercises, students would critique material provided by the instructor.  It seems that this training review step, on average, generated more verbose comments (though not necessarily more relevant ones) than the peer review step, while the peer review step tended to produce more relevant comments.  The experimenters have a few theories on why that is:

  1. Social pressure could cause students to be less verbose in their critiques of one another.
  2. Increased learning after the training exercise could cause the students to be more succinct and precise in their reviews.
  3. Students were more engaged in the training exercise, and felt inclined to be more verbose (even though they were less precise).

Unfortunately, the experimenters note, a lot rests on the motivation and attitude of the students, which wasn’t really considered or measured during the design of the experiment.  So, to break it down simply: the type of review (training vs. peer) and the order (training first, then peer review) caused some numbers to change in their tables…but they don’t really know why.

They had questions on other things too…

Why are there differences in how concepts are reviewed?

  • Are there differences in conceptual difficulty?
  • Do the reviews improve student learning of these concepts?

The three CS topics that were focused on during these courses were the OOP concepts of Abstraction, Decomposition, and Encapsulation.  The experimenters also theorized that successful reviews go through the following steps:

  1. Analysis
  2. Evaluation
  3. Explanation
  4. Revision

(Unspecified) variations in how the students used these steps, and how verbose they were at each step, caught the experimenters’ attention.  They wonder if this has something to do with the conceptual difficulty of each topic, or if the reviews were affecting the students’ understanding over time.

More questions they brought up…

Is reviewing an engaging and interesting task in computer science?

Very good question.  The experimenters noted that they had no measure of students’ interest, feelings, and engagement in the reviewing process, and that it is important to look at these attitudes over time for improvements or problems.

Are there significant learning benefits to reviewing in the early computer science curriculum as compared to other, common homework/lab exercises?

I’ll let them explain this one:

While we have identified a number of potential benefits from reviewing, we have not shown that it is better than or as good as what we currently do.  We require some sort of baseline to compare our efforts to.  We need a control group in our experiments in order to judge effectiveness.

And then the paper pretty much ends.

Where To Go From Here

The authors do a good job of lining up some interesting questions towards the end.  I guess this is how you salvage an experiment that didn’t go as planned – find the deeper questions, and see if somebody else can do a better job.

Or maybe, if you give the authors enough time, they’ll try to do the better job themselves. I think I’ve found the next paper to review.

Exploring Peer Review in the Computer Science Classroom: Part 1

Exploring Peer Review in the Computer Science Classroom

by Scott Turner and Manuel A. Pérez Quiñones

I’m new at reading papers, so I’ve gotten used to 5- or 10-pagers.  This looks like the big one, though – 69 pages.

I assume they have something significant to say.  The title sure sounds interesting, especially considering what I’m looking for.

Anyhow, it’s a big paper, and there’s a lot to go through.  Let’s get started.

Right off the bat, I can see that they’re interested in the same problem that I am:

Peer review, while it has many known benefits (Zeller 2000; Papalaskari 2003; Wolfe 2004; Hamer, Ma et al. 2005; Trytten 2005), and is used extensively in other fields (Falchikov and Goldfinch 2000; Topping, Smith et al. 2000; Liu and Hansen 2002; Dossin 2003; Carlson and Berry 2005) and in industry (Anderson and Shneiderman 1977; Anewalt 2005; Hundhausen, Agrawal et al. 2009), is not as widely used in the computer science curriculum.  This may be due, in part, to a lack of information about what, who and when to review in order to achieve specific goals in computer science.

This next part got my attention:

That is not to say that the literature is silent on these issues.  The studies make these choices but there are few reasons given for the decisions and even fewer comparisons performed to show relative value of those options.

Holy smokes.  A pretty bold critique of those previous papers (at least one of which I’ve already reviewed).  It sounds like Scott and Manuel were as disappointed in some of the peer code review literature as I was.  If I were in an audience being read this paper aloud, I would “wooo!” at this point.

What is needed is a clearer understanding of the requirements that the discipline imposes on the peer review process so that it may be effectively used.

Cool – I’m looking forward to seeing what they dug up.

Reading onwards through the introduction, I’m seeing the same basic arguments for peer code review that I’ve seen elsewhere.  I’ll summarize:

  • PCR (peer code review)  involves active use of higher levels of Bloom’s Taxonomy:  synthesis, analysis and evaluation, both for reviewers and review-ees
  • PCR prepares students for industry, since code review is (or should be) a common part of professional software development

Soon after, a good point is brought up – PCR is potentially a beneficial learning activity, but it all depends on the goals of a particular assignment.  A particular review structure may be better for improving code quality, and another might be better for increasing student motivation.  These considerations need to be taken into account when choosing a PCR structure.

By PCR structure, I think the writers mean:

  • What is being reviewed – source code vs design diagrams, for example
  • Who is being reviewed – students could review one another, or they could all review a piece of code provided by the instructor
  • When the review occurs – what level of students should take part in PCR?  Is there a minimum grade level that must be achieved before PCR is effective?  And when, during a project, should PCR happen?  Early in the design process, or after-the-fact – like a “code autopsy”?
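If I were to jot that down as a quick sketch (purely my own framing, with made-up names – nothing from the paper), the “structure” boils down to a handful of choices:

```python
# A rough sketch of the "PCR structure" variables as I understand them.
# The class and the example values are my own invention, not the paper's.
from dataclasses import dataclass

@dataclass
class PCRStructure:
    what: str   # e.g. "source code" or "design diagrams"
    who: str    # e.g. "a peer's submission" or "instructor-provided code"
    when: str   # e.g. "early in the design process" or "code autopsy"

# One configuration out of the many an instructor could pick:
review_setup = PCRStructure(
    what="source code",
    who="a peer's submission",
    when="code autopsy",
)
print(review_setup)
```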

These are good questions.  No wonder this paper is so long – I think their experiment is going to try to answer them all.

And just when things are starting to get good…they go into a literature review.  Yech.  I know it’s important to lay the groundwork, but I think I just felt my eyes turn to oatmeal.

The literature review starts by briefly listing off those sources again – past papers that have tried to deal with this topic.  They categorize them based on what the papers try to deliver, and then they give them a light slam for not having a scientific basis on which to form their PCR structures.  I heartily agree.

The first part of the literature review discusses the benefits of peer review in other fields.  Papers as far back as the 1950s are cited.  I think it goes without saying that having other people critique your work can be a great way of receiving constructive feedback (ask any playwright, for example).  I guess these fellas feel they have to be pretty rigorous, though, and really ground their argument in some solid past work.  Power to them.

An interesting notion is brought up – peer reviews over an extended length of time within the same groups help cultivate “interaction between…peers and for the building of knowledge”.  A one-time review, however, “[lends] itself more to a cognitive approach…more attention can be paid to the changes in the students’ thought processes”.  Hm.

This fellow named Topping seems to be quite popular with these guys.  Apparently, he came up with something called “Topping’s Peer Review Typology”, and I get the feeling that he is one of their primary sources for figuring out different ways of constructing PCR structures.

Oh, and an important morsel just got snuck in:

We are interested in how the student is affected by the acts of reviewing and being reviewed rather than by the social interaction occurring during the process.

So that sense of community that that other paper was talking about – not being looked at here.

The paper then goes on to chisel down some of the jargon from Topping’s typology and make it fit the field of Computer Science.  By doing this, the writers simply reiterate the variables they’re going to be playing with:  what, who, and when.

The paper then dives into a two page summary of some peer review papers, and the various results that they found.  They note a few instances where peer review seemed to improve student performance, and other cases where peer review resulted in semi-disaster.  There are just as many theories as there are conflicting accounts.  From reading this stuff, it seems that peer review is a vast and complicated topic, and nobody seems to really have a firm grasp on it just yet.

Likewise, rubric creation seems to be a bit of a contentious topic:

While there seems to be a general consensus that rubrics are important and that they improve the peer review activity, there is not as much agreement on how they should be implemented.

However, it is clear that rubrics are useful in peer review for novice reviewers:

Rubrics can supply the guidance students need to learn how to evaluate an assignment.  It provides the needed scaffolding until the students are comfortable with the process and the domain to make correct judgements.

Makes sense to me.

The paper then asks an important question: what makes a “successful” review in the education context?  Greater understanding?  Learning new concepts?  Improved grades?  Better designs?  Fewer code defects?

The answer:  it really depends on the instructor, and what the course wants from the PCR process.  For example, what is more important – having the reviewer learn to review?  Or having the review-ee receive good feedback?  Or both?  PCR is complex, and has lots of things to consider…

The next section highlights how technology is used to support peer review.  One particularly interesting example is a Moodle module that allows for peer reviews on assignments.  Apparently, the authors are fans.  I’ve never used Moodle before, and haven’t yet found the module they’re talking about, but it sounds worth investigating.

The very next section details their experiment – their method, their data, and their results.  However, this post is getting a bit long, so I’m going to stop here, and continue on in a second post.

Stay tuned for the exciting conclusion!

Integrating Pedagogical Code Reviews into a CS 1 Course: An Empirical Study

Integrating Pedagogical Code Reviews into a CS 1 Course: An Empirical Study

by Christopher Hundhausen, Anukrati Agrawal, Dana Fairbrother, and Michael Trevisan

Here’s another paper talking about using code review in a first year programming course.  Let’s give it a peek.

The idea of this paper is to introduce a type of “studio-based” learning environment for programmers.  The term originates from architecture, where architecture students bring various designs into class, and then the classroom as a unit critiques and evaluates the design.  In the studio-based learning model, the instructor guides the critiques.  This way, not only does the critiqued student get valuable feedback, but the critics also learn about what to critique.  Remember when I talked about learning how to read code? Studio-based learning seems to be one solution to that problem.

Studio-based learning could also help foster community among the students.  In a field where isolated cubicles are still quite normal (I’m sitting in one right now!), a sense of community could be a welcome relief.

The paper also notes the following:

…the peer review process can (a) prepare students to deal with criticism [1], teach students to provide constructive criticism to others [1], provide students with experience with coming to a consensus opinion in cases where opinions differ [1], and build teamwork skills [8].  All of these are “soft” or “people” skills that are likely to be important in their future careers.

I whole-heartedly agree.

The paper then goes on to give a quick survey of how peer review (not just for code, but peer review in general) is currently used in education:

  • pair programming (which is what I experienced at UofT)
  • “in-class conference” model
  • peer feedback meetings (mostly for project documentation)
  • web-based peer grading solutions (RRAS, and Peer Grader)
  • peer code review

Maybe I’m just getting tired of reading these papers, but I found this section particularly difficult to get through.  I had to read it 3 or 4 times just to understand what was going on.

Finally, they get on with it, and talk about their approach to peer code review – a student-oriented version of a formal code review.  Yikes – if by formal code review they mean a Fagan Inspection, then these poor students are in for some long meetings…

Here is the 5-step outline of their approach:

  1. Plan the inspection of a specific piece of code.
  2. Hold a kick-off meeting with an inspection team to distribute the code to be inspected, train the team on the process, and set inspection goals.
  3. Have members of the inspection team inspect the code for defects on their own time.
  4. Hold an inspection meeting to log issues found by individual members’ inspections, and to find additional issues.
  5. Edit the code to address the issues uncovered in the inspection, and verify that the issues have been resolved.

Yep, that sounds a lot like a Fagan Inspection.  As I’ve already found out, there are lighter, faster, and just-as-effective approaches to peer code review…

After using their approach, the study noted:

  1. A positive trend in the quality of the students’ code, and a negative trend in the number of defects found per review
  2. Discussion transitioned from syntax and style to higher-level concepts, such as the architecture and design of the system.  Thus, meaningful discussion was generated using their technique.  The study notes, however, that “…we cannot provide any objective evidence that students were able to provide helpful critiques of each other’s code within the code reviews themselves.”
  3. There is anecdotal evidence that suggests that this work helped to create more of a community feel among the students in the class

Sounds like a lot of anecdotal evidence.

I don’t know – reading the results of this paper, I was disappointed.  The evidence/data they collected feels light and insubstantial.  I certainly support what the study was trying to do, and maybe I’ve read too many peer review papers, but I don’t find it surprising anymore to hear that peer code review improves students’ code.  It’s good to have evidence for that, but I was hoping for something more.

The paper closes with the writers almost agreeing with me:

There are several limitations to our results that suggest the need for more rigorous follow-up studies with both larger student samples and additional data collection methods.

I knew it wasn’t just me!

Anyhow, the paper finishes with the writers outlining how they’d do future studies.

And that’s that.

Process Improvement of Peer Code Review and Behavior Analysis of its Participants

Process Improvement of Peer Code Review and Behavior Analysis of its Participants

by WANG Yan-qing, LI Yi-jun, Michael Collins, LIU Pei-jie
SIGCSE ’08, March 12-15, 2008

If you’ve been following, I’ve been trying to figure out why code reviews aren’t a part of the basic undergraduate computer science curriculum.  The other papers and articles I’ve read so far have had less to do with the classroom, and more to do with code reviews in industry.

This paper got a little bit closer to the classroom, and, more importantly, closer to my particular question.

To begin, the paper introduces some terminology I’m not familiar with – the software crisis.  I’m familiar with the concept though:  writing good software for large systems is not a simple problem, and as computers become a bigger and more important part of our lives, this inability to easily write good code could quickly end up biting us in the collective rear.

Code review is one of several methods that the software industry has adopted to try to “tame” the software crisis.

I like this part:

Even though code reviews are time consuming, they are much more efficient than testing [19]. A typical engineer, for example,  will find approximately 2 to 4 defects in an hour of unit testing but will find 6 to 10 defects in each hour of review code [19].

What more argument do you need?  It’s just a matter of getting rid of that “time consuming” part, right?  Right…

And this is even juicier:

PCR [peer code review] is a technique which is generally considered to be effective on promoting students’ higher cognitive skills [9], since students use their own knowledge and skill to interpret, analyze and evaluate others’ work to clarify and correct it [2].

Wonderful!  I’m in my problem space!

Reading along, it seems that this paper is introducing a new, refined structure for PCR, and will detail results of a study on using that new structure in a programming course.  Cool.

The introduction ends by saying that the new structure seemed to enhance the quality of students’ work, as well as their ability to critique one another.  Great news!

It’s not all sunshine and puppies, though – they also mention that they ran into a few problems, and that they’ll be discussing those too.

So the first thing they’ve done is try to make the terminology clearer:

Roles

  • Author:  the student who writes the code that is being reviewed
  • Reviewer:  the person who is reviewing the code
  • Reviser:  the author, after receiving a Comments Form from a Reviewer
  • Instructor:  the teacher or qualified TA who is responsible for the class

Documents

  • Manuscript Code:  the unrevised code that is first submitted by an Author
  • Comments Form:  the comments given from the Reviewer to the Author
  • Revision Code:  the code that is revised by the Reviser after the Reviewer gives the Reviser the Comments Form (whew…follow that?)
  • Reference Solution:  the “answer” to the assignment, held by the Instructor

Now that we’ve got all the players and documents laid out, let’s take a look at the process:

Process

  • Phase 1:  The Author completes the Manuscript Code
  • Phase 2:  The Author emails the Manuscript Code to the Instructor.  Simultaneously, a blank Comments Form and a copy of the Manuscript Code are sent to a Reviewer
  • Phase 3:  The Reviewer reviews the code as soon as possible, filling in the Comments Form.
  • Phase 4:  The Reviewer sends the completed Comments Form back to the Author, and also sends a carbon copy to the Instructor
  • Phase 5:  After receiving the Comments Form, the Reviser (who was originally the Author…oh boy…almost went cross-eyed, there) makes the appropriate alterations to the original Manuscript Code, referencing the Comment Form where appropriate.  The completed Revision Code is emailed to the Instructor.
  • Phase 6:  The Instructor should now have a copy of the original Manuscript Code, the completed Comments Form, and the final Revision Code.  The Instructor should be able to check that the Author and Reviewer did their work properly.

Wow.  What a convoluted way of saying something simple.  They even included a diagram, with lots of arrows.  Somehow, I think this could be said simpler.  Oh well.

It also sounds like a lot of emailing.  You’re balancing your course on the reliability of the email protocol?  Errr….
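Just to convince myself I actually followed all of that, here’s the whole flow boiled down to a few lines of Python (my own simplification – the names are invented, and in the real process every hand-off below is an email, not a function call):

```python
# My own boiled-down sketch of the six phases – not code from the paper.
def run_pcr_round(write_code, review_code, revise_code, check):
    inbox = {}                                    # what the Instructor ends up holding

    manuscript = write_code()                     # Phase 1: Author writes the Manuscript Code
    inbox["manuscript"] = manuscript              # Phase 2: a copy goes to the Instructor
                                                  #          (and, with a blank form, to a Reviewer)

    comments = review_code(manuscript)            # Phases 3-4: Reviewer fills in the Comments Form,
    inbox["comments"] = comments                  #             cc'ing the Instructor

    revision = revise_code(manuscript, comments)  # Phase 5: Author-turned-Reviser revises the code
    inbox["revision"] = revision

    return check(inbox)                           # Phase 6: Instructor checks all three documents

# A toy run, just to show the shape of the thing:
result = run_pcr_round(
    write_code=lambda: "print('helo world')",
    review_code=lambda code: "typo: 'helo' should be 'hello'",
    revise_code=lambda code, comments: "print('hello world')",
    check=lambda inbox: sorted(inbox),
)
print(result)  # ['comments', 'manuscript', 'revision']
```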

Well, let’s see what problems they ran into…

  1. The assumption that all participants would carefully and responsibly carry out each phase of the process was faulty.  This may have been due to “careless authors, irresponsible reviewers and busy instructors in the review process”.
  2. Some students lack the coding ability to either:
    1. Produce code that is readable and reviewable in a constructive way
    2. Review code in a constructive or informed way
  3. The process is difficult to control due to the reliance on email (no kidding!)
    1. Some students would not submit Manuscript Code or Comment Forms on time
    2. Some students would submit multiple copies of their Manuscript Code, due to an inherent mistrust of the reliability of email
  4. There was opportunity for students to “game” the process to their advantage.  In this particular study, there was very little control over who was doing what.  Though a particular Author was supposed to write the Manuscript Code, this wasn’t enforced, and there was an occasion where another student wrote the code instead.  Same with review writing and revision writing.  Yeah, cheating is always a problem.

The paper then goes into some discussion about the observed behaviour of Authors and Reviewers.  They noted that most students did not enjoy reviewing very poorly written code, and did not give their best effort on reviews of such code.  Mere encouragement from the instructor was not enough to compel them to give their best reviews, either.  The paper suggests finding some way of making Reviewers review code more carefully, perhaps by awarding bonus marks.

Behaviour of Instructors was also analyzed.  The paper mentioned that Instructors with large class sizes might try to cut down on their workload by only viewing the Comment Forms that the Reviewers had provided.  But this strategy does not give the Instructor the entire story, and is open to manipulation from students.

The paper ends with a discussion about group formations, and how best to diffuse student cheating conspiracies.

At the last moment, they suggest some “web-based [application] with a built-in blind review mechanism” be developed.

Hm.

Smart Bear, Cisco, and the Largest Study on Code Review Ever

In 2006, Smart Bear Software teamed up with the MeetingPlace development group at Cisco Systems and, over 10 months, produced the “largest-ever case study of its kind” on a “light-weight code review process”.

The results of the study can be found in the free book “Best Kept Secrets of Peer Code Review”.

They can also be found in one of the sample chapters that they’ve put on the site.  You can read the study right here, if you’re interested.

Here are my thoughts on the chapter…

First of all, my guard is up a bit. This all seems a bit like a sales pitch, since the software that Cisco ends up using is Smart Bear’s own Code Collaborator. I’m reading the first paragraph, and already I know how it ends – “everybody is happy, the software is improved dramatically, so you should buy Code Collaborator”. Something like that. I’ll be happy when I see some solid data, some numbers, some graphs…

Ok, I’m in at page 54 – they’re talking about how data was collected, and how they pared it down to get the most meaningful results.  This is good.  This sounds like science, and not a sales pitch.  Nice.

The next thing the study talks about is the rate at which lines of code (LOC) are analyzed – the LOC inspection rate.  The data they’ve collected shows no discernible correlation between the LOC inspection rate and the amount of code to inspect.  There were rare exceptions where a reviewer did seem to show such a correlation, but these reviewers tended to be novices who had not participated in many reviews before.  Analyzing the LOC inspection rates by code authors (those who are having their code reviewed) also failed to show any correlations.  In fact, there were several cases where separate reviewers took widely varied amounts of time on the same chunk of code under review.

So this leaves us with no clear answer on what factors play a part in LOC inspection rate.

The study then begins to discuss the effectiveness of the reviews, and whether or not slow reviews reveal more “defects” (where a defect is defined as any change to the code that wouldn’t have happened without the review). Because defect data from the Code Collaborator database was not considered wholly reliable (see pages 62 and 63 if you want to know why), 300 reviews were randomly plucked from the original 2500, and the discussions in each one were analyzed to gather the defect statistics.

The study then introduces the concept of “defect density” – the number of defects detected per 1000 lines of code (referred to henceforth as kLOC).
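In other words (with numbers I’ve made up purely to show the arithmetic – these aren’t figures from the study):

```python
# Made-up numbers, just to show how the two metrics in this chapter work.
loc_reviewed = 250     # lines of code in the review
defects_found = 8      # "defects" = changes that wouldn't have happened without the review
hours_spent = 0.75     # time spent reviewing

defect_density = defects_found / (loc_reviewed / 1000.0)   # defects per kLOC
inspection_rate = loc_reviewed / hours_spent                # LOC per hour

print(f"defect density:  {defect_density:.0f} defects/kLOC")   # 32 defects/kLOC
print(f"inspection rate: {inspection_rate:.0f} LOC/hour")      # 333 LOC/hour
```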

I’ll skip right to some results:

Our reviews had an average 32 defects per 1000 lines of code.  61% of the reviews uncovered no defects; of the others the defect density ranged evenly between 10 and 130 defects per kLOC.

I’m surprised that 61% of the reviews found no defects.  That’s remarkably high, in my opinion.  True, it’s only a sample of 300 reviews, but still, my instincts were expecting a significantly lower number.

An even more interesting result is that defect density (and therefore review effectiveness) dropped off for large amounts of code under review.  The study notes that:

Anything below 200 lines produces a relatively high rate of defects, often several times the average.

So there seems to be a sweet spot.  I wonder if this is a factor in what was causing the surprisingly high number of defect-less reviews in that sample of 300.  Perhaps many of those reviews were for large sections of code.  Or perhaps they were reviews that involved only a single line of code.  The study doesn’t go into this.

What the study does go into is a general guideline for limiting the time that code reviews take.  Their study noted a stark dropoff in review effectiveness after about an hour.  Totally understandable – I think an hour reviewing someone else’s code would probably be my limit before I started getting distracted.

The study then goes on to suggest that the “slower is better” approach to reviewing code is the right idea:

Reviewers slower than 400 lines per hour were above average in their ability to uncover defects. But when faster than 450 lines/hour the defect density is below average in 87% of the cases.

So already there are some guidelines: try to review something around 200 LOC, take your time, but don’t go over an hour.  This is useful information.

The study then goes into a test of a slight modification to how reviews are performed: before submitting code for review, the author annotates the code, describing how the changes are structured, why they were coded the way they were, etc.  This has a dual benefit:  it gives reviewers some clues about how to look at the code (while, hopefully, maintaining the distance they need to do a good job), and it also gets the author to go over their code again to weed out obvious defects.

So, some reviews were carried out in this fashion.  Here’s what they found:

First, for all reviews with at least one author preparation comment, defects density is never over 30; in fact the most common case is for there to be no defects at all! Second, reviews without author preparation comments are all over the map [in terms of defect density] whereas author-prepared reviews do not share that variability.

The study gives two possible conclusions for these results:

  1. Authors gave their code such a thorough look while annotating it that most defects were eliminated right off the bat.  I’m…skeptical of this conclusion.
  2. Since authors were explaining, or defending, their changes, this sabotaged the reviewers’ ability to do their job effectively.

I find myself believing the second conclusion more, simply from experience:  if somebody is guiding me through things, suddenly I’m in the passenger seat, and I’m less inclined to disagree with a change if their explanation or defense sounds solid.

However, Smart Bear disagrees:

A survey of the reviews in question show the author is being conscientious, careful, and helpful, and not misleading the reviewer.  Often the reviewer will respond or ask a question or open a conversation on another line of code, demonstrating that he was not dulled by the author’s annotations.

…we believe that requiring preparation will cause anyone to be more careful, rethink their logic, and write better code overall.

I’d like to see their data on this.  In particular, I’d like to see how often reviewers detected defects in lines of code that the author had annotated.  Unfortunately, this data is not provided in the study.

In the last few pages, the study notes that while review size has a detrimental impact on defect density (the number of defects reviewers found per kLOC), the number of defects found per hour seemed to hold at a roughly fixed rate.  While this seems at odds with the original discovery that smaller reviews are more effective, they note:

Although the smaller reviews afforded a few especially high rates, 94% of all reviews had a defect rate under 20 defects per hour regardless of review size.

…the take-home point from Figure 22 is that defect rate is constant across all the reviews regardless of external factors.

So, assume that a reviewer finds defects at a steady hourly rate, but that this rate drops off after about an hour.  Given a small section of code, of course the number of defects detected per kLOC will be high.  And given a large chunk of code, of course that same hour’s worth of defects will be spread out over far more lines, so the density will be lower.
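Here’s the arithmetic that reconciles the two findings (with illustrative numbers of my own – not data from the study):

```python
# If defects are found at a roughly constant hourly rate, density is just a
# matter of how much code that hour's worth of findings gets spread over.
# The inputs below are illustrative, not figures from the study.
DEFECTS_PER_HOUR = 15
INSPECTION_RATE = 300      # LOC per hour
MAX_USEFUL_HOURS = 1.0     # effectiveness drops off after about an hour

for loc in (100, 400, 1000):
    hours = min(loc / INSPECTION_RATE, MAX_USEFUL_HOURS)
    defects = DEFECTS_PER_HOUR * hours
    density = defects / (loc / 1000.0)
    print(f"{loc} LOC -> ~{defects:.0f} defects, ~{density:.0f} per kLOC")

# The small review ends up with a high density, the big review with a low one,
# even though the defects-per-hour rate never changed.
```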

It’s the steady defect detection rate that bothers me – you would imagine that the detection rate would depend on the quality of the code and also on the experience of the reviewer.  But I guess, according to this study, it doesn’t.

And so, the study goes into its conclusions.  I can’t really do much better summarizing the first conclusions for you than they did, so I’ll just regurgitate:

  • LOC under review should be under 200, not to exceed 400.  Anything larger overwhelms reviewers and defects are not uncovered.
  • Inspection rates less than 300 LOC/hour result in best defect detection.  Rates under 500 are still good; expect to miss significant percentage of defects if faster than that.
  • Authors who prepare the review with annotations and explanations have far fewer defects than those that do not.  We presume the cause to be that authors are forced to self-review the code.
  • Total review time should be less than 60 minutes, not exceed 90.  Defect detection rates plummet after that time.
  • Expect defect rates around 15 per hour.  Can be higher only with less than 175 LOC under review.
  • Left to their own devices, reviewers’ inspection rate will vary widely, even with similar authors, reviewers, files, and size of the review.

Given these factors, the single best piece of advice we can give is to review between 100 and 300 lines of code at a time and spend 30-60 minutes to review it.  Smaller changes can take less time, but always spend at least 5 minutes, even on a single line of code.
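Since I’ll probably want to come back to these rules of thumb, here they are condensed into a quick sanity check (the thresholds are the study’s; the little function wrapping them is just my own convenience):

```python
# Thresholds from the study's conclusions; the wrapper function is my own.
def review_sanity_check(loc, minutes):
    """Flag a planned code review that strays outside the study's guidelines."""
    warnings = []
    if not 100 <= loc <= 300:
        warnings.append("aim for 100-300 LOC per review (hard ceiling around 400)")
    rate = loc / (minutes / 60.0)   # LOC per hour
    if rate > 500:
        warnings.append("over 500 LOC/hour - expect to miss a lot of defects")
    elif rate > 300:
        warnings.append("under 300 LOC/hour is best; under 500 is still okay")
    if minutes > 90:
        warnings.append("detection plummets after 60-90 minutes")
    return warnings or ["looks reasonable"]

print(review_sanity_check(loc=200, minutes=45))   # ['looks reasonable']
print(review_sanity_check(loc=800, minutes=60))   # too big, and way too fast
```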

The study then goes into the differences in effectiveness between heavy-duty, Fagan-esque code reviews and the lightweight style of code review that took place at Cisco.  While some results from their study exactly match results from studies of heavyweight code reviews (how much time to spend on a review, when effectiveness drops off), there were some stark differences too.

For example, in the lightweight study, defect detection rate using Smart Bear was 7 times faster than the average rate found across four studies of traditional code review methods.  Sounds like we’re getting into that sales-pitch part…

The study then admits that there was no experimental control – reviews using heavyweight techniques weren’t carried out in parallel with the Code Collaborator study, so comparisons of their effectiveness on the MeetingPlace software have little-to-no data to work with.

In the end, their conclusion is that lightweight code review is just as effective as the traditional methods, while being remarkably faster to boot.  I’d like to see more evidence to back up the comparison on effectiveness, but faster seems more than plausible (Fagan inspections involve very lengthy meetings, so I’ve read).

Smart Bear agrees with my last point on effectiveness comparison, and notes that future study should be conducted where the same set of code is analyzed using both heavy and lightweight methods.

They finish it off with an invitation to software development shops to contact Smart Bear if they’d like to be involved in such a study.

So that’s the gist of it.