- alertCheck just went public on Mozilla Add-ons after being reviewed by an add-ons editor
- As of this writing, alertCheck has 740 downloads
- As of this writing, alertCheck has 4 reviews – all positive
Not bad for my first add-on!
A while back, I went on a little rant about how CS students don’t learn how to read and critique code, and that because of this, they’re missing out on a huge opportunity for learning. They’re not exercising their abilities to comprehend other people’s code.
But let’s step back for a second. Forget code for a minute, and let’s just think about reading in general.
The take-home message: reading is not a skill that can be sharpened in a vacuum. One of the keys to increasing reading comprehension is to increase the variety of reading material to practice with. This will help build the foundation needed to understand more sophisticated material.
Actually, this really doesn’t come as a surprise to me. In the Drama department, before diving into a new script, we are encouraged to briefly study the time period (slang, accents, and turns of phrase in particular) and the subject matter of the play. This builds a foundation of context, so that the script makes some sense the first time it is read. And the first time a script is read is very important, because for me, it lays down many of the assumptions that I carry throughout my time working with the play. Every subsequent reading of that play builds off of the first reading.
So let’s direct this back to reading source code.
The idea of studying code as art is intriguing…maybe I’m not the only one who thinks of it like sculpture.
Now, where was I?
I hate to disappoint you, but this section starts as a bit of a downer:
…Unfortunately, the data collected was spotty, to say the least, and was not linked together well enough to support a reasonable, detailed analysis to meet our goals.
Yikes. Oh well, let’s see what happened…
One issue we discovered was that our design was too broad to give definitive results. We did, however, have enough data to allow us to narrow down the focus for a second study. From the analysis of one class, we were able to identify two interesting areas for further research. The type of review appears [sic] have a significant effect on the length and focus of the review. We also found evidence that students reviewed some of the concepts differently than they did others. Both of these findings should be explored in the future work.
Good. At least there’s some groundwork for somebody to do a future study.
Reading on, I’m impressed with the scope of their experiment. Instead of just experimenting on a single classroom, they ran the experiment across 8 classrooms, each with anywhere from 10 to 60 participants. Ambitious. Nice.
While their experiment may have been too broad to get the results they were looking for, it certainly has more authority than the studies that just used a single, small class.
However, maybe I spoke too soon:
The data collected for this study did not occur as smoothly as we would have wished. … As a result, we were able to collect a large amount of data but it is not as complete as we would have liked.
Hm. Doesn’t sound that great.
Apparently, data was supposed to be collected from surveys, review rubrics, and questionnaires, but it looks like the questionnaire data kind of fell through:
The number of responses to the questionnaires was low. Three classes had no post-questionnaire responses at all. Of those classes that did have responses for the second questionnaire, the number was too small to lend any confidence to a statistical analysis…
Review rubric data was a little more interesting – 996 completed rubrics were collected from 299 reviewers from the 8 classes over the course of the study. Nice. But, again, it wasn’t all flowers and hugs:
While the amount of data was large, it is also incomplete. Of those classes which were intended to have both training and a peer reviews [sic], three of them were not able to complete the second review assignment and, so, have nothing to be compared to. Two of the other classes have only a moderate number of participants (15-20) which is not as high as we would have liked for our statistical analysis. One class provided no viable information at all…
Things just seem to get worse and worse for these guys.
And, not to kick them while they’re down, but their writing seems to get worse and worse too. I’m noticing more typos and tense errors as I go along. Maybe the stress was getting to them…
So, what did they find? Drum roll, please…
Surprise! It’s inconclusive!
It sounds like they found more questions than answers…
Is it type or order (or both) that is causing the effects the [sic] training and peer review?
It took me a little while to figure out what they were asking here. Apparently, before engaging in peer review exercises, students would critique material that was provided by the instructor. It seems that the training review step, on average, generated more verbose comments (though, not necessarily relevant comments) than the peer review step. But the peer review step tended to produce more relevant comments. The experimenters have a couple of theories on why that is:
Unfortunately, the experimenters note, a lot rests on the motivation and attitude of the students, which wasn’t really considered or measured during the design of the experiment. So, to break it down simply: the type of review (training vs. peer) and the order (training first, then peer review) caused some numbers to change in their tables…but they don’t really know why.
They had questions on other things too…
Why are there differences in how concepts are reviewed?
- Are there differences in conceptual difficulty?
- Do the reviews improve student learning of these concepts?
The three CS topics that were focused on during these courses were the OOP concepts of Abstraction, Decomposition, and Encapsulation. The experimenters also theorized that successful reviews go through the following steps:
(Unspecified) variations in how the students used these steps, and how verbose they were at each step, caught the experimenters’ attention. They wonder if this has something to do with the conceptual difficulty of each topic, or if the reviews were affecting their understanding over time.
More questions they brought up…
Is reviewing an engaging and interesting task in computer science?
Very good question. The experimenters noted that they had no measure of students’ interest, feelings, and engagement in the reviewing process. They note that it is important to look at these attitudes over time for improvements or problems.
Are there significant learning benefits to reviewing in the early computer science curriculum as compared to other, common homework/lab exercises?
I’ll let them explain this one:
While we have identified a number of potential benefits from reviewing, we have not shown that it is better than or as good as what we currently do. We require some sort of baseline to compare our efforts to. We need a control group in our experiments in order to judge effectiveness.
And then the paper pretty much ends.
The authors do a good job of lining up some interesting questions towards the end. I guess this is how you salvage an experiment that didn’t go as planned – find the deeper questions, and see if somebody else can do a better job.
Or maybe, if you give the authors enough time, they’ll try to do the better job themselves. I think I’ve found the next paper to review.
I’m new at reading papers, so I’ve gotten used to 5- or 10-pagers. This looks like the big one, though – 69 pages.
I assume they have something significant to say. The title sure sounds interesting, especially considering what I’m looking for.
Anyhow, it’s a big paper, and there’s a lot to go through. Let’s get started.
Right off the bat, I can see that they’re interested in the same problem that I am:
Peer review, while it has many known benefits (Zeller 2000; Papalaskari 2003; Wolfe 2004; Hamer, Ma et al. 2005; Trytten 2005), and is used extensively in other fields (Falchikov and Goldfinch 2000; Topping, Smith et al. 2000; Liu and Hansen 2002; Dossin 2003; Carlson and Berry 2005) and in industry (Anderson and Shneiderman 1977; Anewalt 2005; Hundhausen, Agrawal et al. 2009), is not as widely used in the computer science curriculum. This may be due, in part, to a lack of information about what, who and when to review in order to achieve specific goals in computer science.
This next part got my attention:
That is not to say that the literature is silent on these issues. The studies make these choices but there are few reasons given for the decisions and even fewer comparisons performed to show relative value of those options.
Holy smokes. A pretty bold critique of those previous papers (at least one of which I’ve already reviewed). It sounds like Scott and Manuel were as disappointed in some of the peer code review literature as I was. If I were part of an audience being read this paper aloud, I would “wooo!” at this point.
What is needed is a clearer understanding of the requirements that the discipline imposes on the peer review process so that it may be effectively used.
Cool – I’m looking forward to seeing what they dug up.
Reading onwards through the introduction, I’m seeing the same basic arguments for peer code review that I’ve seen elsewhere. I’ll summarize:
Soon after, a good point is brought up – PCR is potentially a beneficial learning activity, but it all depends on the goals of a particular assignment. A particular review structure may be better for improving code quality, and another might be better for increasing student motivation. These considerations need to be taken into account when choosing a PCR structure.
By PCR structure, I think the writers mean:
These are good questions. No wonder this paper is so long – I think their experiment is going to try to answer them all.
And just when things are starting to get good…they go into a literature review. Yech. I know it’s important to lay the groundwork, but I think I just felt my eyes turn to oatmeal.
The literature review starts by briefly listing off those sources again – past papers that have tried to deal with this topic. They categorize them based on what the papers try to deliver, and then give them a light slam for not having a scientific basis on which to form their PCR structures. I heartily agree.
The first part of the literature review discusses the benefits of peer review in other fields. Papers as far back as the 1950s are cited. I think it goes without saying that having other people critique your work can be a great way of receiving constructive feedback (ask any playwright, for example). I guess these fellas feel they have to be pretty rigorous though, and really ground their argument in some solid past work. Power to them.
An interesting notion is brought up – peer reviews over an extended length of time within the same groups help cultivate “interaction between…peers and for the building of knowledge”. One-time reviews, however, “[lend themselves] more to a cognitive approach…more attention can be paid to the changes in the students’ thought processes”. Hm.
This fellow named Topping seems to be quite popular with these guys. Apparently, he came up with something called “Topping’s Peer Review Topology”, and I get the feeling that he is one of their primary sources in figuring out different ways of constructing PCR structures.
Oh, and an important morsel just got snuck in:
We are interested in how the student is affected by the acts of reviewing and being reviewed rather than by the social interaction occurring during the process.
So that sense of community that the other paper was talking about – not being looked at here.
The paper then goes on to chisel down some of the jargon from Topping’s Topology, and make it fit the field of Computer Science. By doing this, the writers simply reiterate the variables they’re going to be playing with: what, who, and when.
The paper then dives into a two page summary of some peer review papers, and the various results that they found. They note a few instances where peer review seemed to improve student performance, and other cases where peer review resulted in semi-disaster. There are just as many theories as there are conflicting accounts. From reading this stuff, it seems that peer review is a vast and complicated topic, and nobody seems to really have a firm grasp on it just yet.
Likewise, rubric creation seems to be a bit of a contentious topic:
While there seems to be a general consensus that rubrics are important and that they improve the peer review activity, there is not as much agreement on how they should be implemented.
However, it is clear that rubrics are useful in peer review for novice reviewers:
Rubrics can supply the guidance students need to learn how to evaluate an assignment. It provides the needed scaffolding until the students are comfortable with the process and the domain to make correct judgements.
Makes sense to me.
The paper then asks an important question: what makes a “successful” review in the education context? Greater understanding? Learning new concepts? Improved grades? Better designs? Fewer code defects?
The answer: it really depends on the instructor, and what the course wants from the PCR process. For example, what is more important – having the reviewer learn to review? Or having the review-ee receive good feedback? Or both? PCR is complex, and has lots of things to consider…
The next section highlights how technology is used to support peer review. One particularly interesting example is a Moodle module that allows for peer reviews on assignments. Apparently, the authors are fans. I’ve never used Moodle before, and haven’t yet found the module they’re talking about, but it sounds worth investigating.
The very next section details their experiment – their method, their data, and their results. However, this post is getting a bit long, so I’m going to stop here, and continue on in a second post.
Here’s another paper talking about using code review in a first year programming course. Let’s give it a peek.
The idea of this paper is to introduce a type of “studio-based” learning environment for programmers. The term originates from architecture, where architecture students bring various designs into class, and then the classroom as a unit critiques and evaluates the design. In the studio-based learning model, the instructor guides the critiques. This way, not only does the critiqued student get valuable feedback, but the critics also learn about what to critique. Remember when I talked about learning how to read code? Studio-based learning seems to be one solution to that problem.
Studio-based learning could also help foster community among the students. In a field where isolated cubicles are still quite normal (I’m sitting in one right now!), a sense of community could be a welcome relief.
The paper also notes the following:
…the peer review process can (a) prepare students to deal with criticism, teach students to provide constructive criticism to others, provide students with experience with coming to a consensus opinion in cases where opinions differ, and build teamwork skills. All of these are “soft” or “people” skills that are likely to be important in their future careers.
I whole-heartedly agree.
The paper then goes on to give a quick survey of how peer review (not just for code, but peer review in general) is currently used in education:
Maybe I’m just getting tired of reading these papers, but I found this section particularly difficult to get through. I had to read it 3 or 4 times just to understand what was going on.
Finally, they get on with it, and talk about their approach to peer code review – a student-oriented version of a formal code review. Yikes – if by formal code review they mean a Fagan Inspection, then these poor students are in for some long meetings…
Here is their 5 step outline of their approach:
Yep, that sounds a lot like a Fagan Inspection. As I’ve already found out, there are lighter, faster, and just-as-effective approaches to peer code review…
After using their approach, their study noticed:
Sounds like a lot of anecdotal evidence.
I don’t know, reading the results of this paper, I was disappointed. The evidence/data they collected feels light and insubstantial. I certainly support what the study was trying to do, and maybe I’ve read too many peer review papers, but I don’t find it surprising anymore to hear that peer code review improves students’ code. It’s good to have evidence for that, but I was hoping for something more.
The paper closes with the writers almost agreeing with me:
There are several limitations to our results that suggest the need for more rigorous follow-up studies with both larger student samples and additional data collection methods.
I knew it wasn’t just me!
Anyhow, the paper finishes with the writers outlining how they’d do future studies.
And that’s that.
If you don’t know this already, I really dig adventure games. Seriously. Just click these words to see how much I dig them.
And I keep running into adventure game stuff in the most unexpected places. A few days ago, Yuri Takhteyev from the Faculty of Information spoke to the Software Engineering group about his work studying the use and popularity of the Lua language in Rio de Janeiro, Brazil. When he brought up Lua, I couldn’t help remembering that Lua was used by the GrimE engine to script Grim Fandango…
Wouldn’t it be awesome to find a way of turning my passion for adventure games into something that is useful in the field of Computer Science?
My supervisor has advised me not to think too much about my research paper just yet, and to just peek around to get a feel for what’s going on in the various facets of Computer Science. I take this advice to heart, and yet I can’t help noticing where my passion for adventure games might be applied…
Here are a few things I’ve come up with:
Storytelling Alice is an attempt to find a fun, intuitive way of teaching basic programming with the Alice language to middle-school students. It was designed and developed by Caitlin Kelleher as part of her PhD thesis at Carnegie Mellon. Storytelling Alice is designed to use storytelling as a motivating context to get students to learn various programming techniques.
In Storytelling Alice, students are compelled to learn more in order to tell more of a story. I wonder if they’d be willing to learn more to reveal more of a story? This would be very similar to the way adventure games reward players with story after solving a puzzle.
I’ve recently started taking Khai Truong’s CSC2514 – Human-Computer Interaction course. One of the first papers he got us to read this week was one that he’d written on storyboarding.
Put simply, storyboards are used by interaction designers as a low-cost way of testing out designs with their potential audience. They are similar to the storyboards used in writing/designing movies or television productions, but are instead used to communicate use cases, environment of use, physical embodiment of the system, etc.
Here is a copy of the paper, if you’re interested in reading it.
Here’s something I found interesting:
Commercial products marketed specifically for storyboard creation are available, but they are designed for experts and can be difficult for novices to use … Also, expert designers expressed that the greatest challenge for them is storytelling. These software products are not designed to support that process and may even be detrimental to it, because they do not provide complete creative flexibility in terms of what can be developed.
Very interesting. Adventure games are designed from the ground up to tell a story. I wonder if the tools that adventure games are created with could lend something to these storyboard creation tools?
As my studies continue, perhaps I’ll report more potential uses for adventure game technology.
Until then, I’ll leave you with a clip from a playthrough of one of my favourite adventure games of all time, The Dig. It might not be Dickens, but damned if it doesn’t hold my attention with an iron grip.
So, yesterday I wrote:
[W]hat can Drama bring to Computer Science?
The easy one is presentation/communication skills. A CS student might be brilliant, but that doesn’t mean they can present or communicate. And if an idea can’t be communicated, it’s worthless.
But what else? Any ideas? I’m going to think about this for a bit, and I’ll see if I can come up with any more.
I posted the question on Twitter, and on my Facebook. I was quite surprised by the amount of feedback I got back – apparently, quite a few people are interested in this topic.
Thanks to everybody who posted, or who came up to talk to me about this! Let me summarize what I heard back:
But now I want to hit the big one. There is one thing that I really think Drama can bring to Computer Science. Drama students are very good at it. From what I can tell, Computer Science students rarely get exposed to it.
That thing is collaboration skills.
I already know that a few of my fellow Drama students will laugh at that – and say, “there are plenty of people in this department without collaboration skills”. Yes, this is true. But they tend not to do very well, or produce anything too interesting.
For me, the best, most exciting stuff comes when I’m with a group, and we’re not sure where we’re going with a project, but we just try things. We all throw a bunch of ideas in the middle, and try to put them on their feet. The most unexpected things can happen.
Two years ago, I took a course in Experimental Theatre. We were broken down into groups of 3 or 4 right at the beginning of the term, and given this challenge – show us what you like to see in theatre. Show us what you think good theatre looks like.
That was it. A blank canvas. No script. No “spec”. Just each other. It felt hopeless at first – we’d improv things, trying to get a feel for what our group wanted to do. Nothing would happen, it’d fall flat. We were lost.
But slowly, something started to piece itself together. We found some material that we wanted to play with (The Wizard of Oz), and a subject that we liked – “home”. What it means to be home, why people leave their homes, why we miss home, why we can’t stand home, what if we can’t get home, etc. We divided the work up into 4 sections – 1 for each of us: Dorothy, Cowardly Lion, Scarecrow, Tin Man.
It’s really hard to describe what we did. The characters and structure from The Wizard of Oz were just a playground for a huge meditation on what “home” meant to different people.
And, wouldn’t you know it, the Robert Dziekanski Taser Incident happened just a week or so before we were to present. It integrated perfectly into our piece.
When we finally presented it, some people were incredulous, others nauseous, others outraged. Some were crying. We had a huge class debate on whether or not it was appropriate to include the film clip of the Taser Incident in our piece.
But a lot of people really got something out of it. And I believe a bunch of people from that class went to a protest rally about the incident that took place only a few days later. I heard a lot of really positive things. We were so excited by it that we almost took it to the Toronto Fringe Festival.
In my opinion, that was one of the most interesting, educational, horrifying, and rewarding art pieces I’d ever been involved in. And it all started from nothing.
When are Computer Science students grouped up, and told to make whatever they want? When are they given total freedom to just go crazy, and come up with something beautiful? Something unique? When are they given the frightening prospect of a blank canvas? Maybe I’m being naive – but where are the collaborative creativity assignments in computer science education?
Now, I can imagine someone shouting – “but what about those group assignments! What about CSC318, or CSC301? Those were collaborative!”.
My friend, thanks for trying, but there’s a distinct difference between group problem solving, and collaborative creation. In my mind, for collaborative creation at its best, the ensemble starts with nothing and must create something from it. It’s the difference between having a script to toy with, and not having a script at all.
And don’t just tell me that an independent study fits the bill. The word “independent” sabotages the whole idea – the key word is collaborate.
Oh, and did I mention that Artful Making sounds like an excellent book? Why don’t you go to their website, and read the foreword by Google’s own Dr. Eric Schmidt. I found it very illuminating. I think this is going to the top of my to-read list.
by WANG Yan-qing, LI Yi-jun, Michael Collins, LIU Pei-jie
SIGCSE ’08, March 12-15, 2008
If you’ve been following, I’ve been trying to figure out why code reviews aren’t a part of the basic undergraduate computer science curriculum. The other papers and articles I’ve read so far have had less to do with the classroom, and more to do with code reviews in industry.
This paper got a little bit closer to the classroom, and, more importantly, closer to my particular question.
To begin, the paper introduces some terminology I’m not familiar with – the software crisis. I’m familiar with the concept though: writing good software for large systems is not a simple problem, and as computers become a bigger and more important part of our lives, this inability to easily write good code could quickly end up biting us in the collective rear.
Code review is one of several methods that the software industry has adopted to try to “tame” the software crisis.
I like this part:
Even though code reviews are time consuming, they are much more efficient than testing. A typical engineer, for example, will find approximately 2 to 4 defects in an hour of unit testing but will find 6 to 10 defects in each hour of review code.
What more argument do you need? It’s just a matter of getting rid of that “time consuming” part, right? Right…
And this is even juicier:
PCR [peer code review] is a technique which is generally considered to be effective on promoting students’ higher cognitive skills, since students use their own knowledge and skill to interpret, analyze and evaluate others’ work to clarify and correct it.
Wonderful! I’m in my problem space!
Reading along, it seems that this paper is introducing a new, refined structure for PCR, and will detail results of a study on using that new structure in a programming course. Cool.
The introduction ends by saying that the new structure seemed to enhance the quality of students’ work, as well as their ability to critique one another. Great news!
It’s not all sunshine and puppies, though – they also mention that they ran into a few problems, and that they’ll be discussing those too.
So the first thing they’ve done is try to make the terminology clearer:
Now that we’ve got all the players and documents laid out, let’s take a look at the process:
Wow. What a convoluted way of saying something simple. They even included a diagram, with lots of arrows. Somehow, I think this could be said simpler. Oh well.
It also sounds like a lot of emailing. You’re balancing your course on the reliability of the email protocol? Errr….
Well, let’s see what problems they ran into…
The paper then goes into some discussion about the observed behaviour of Authors and Reviewers. They noted that most students did not enjoy reviewing very poorly written code, and didn’t give their best effort on reviews of such code. Mere encouragement from the instructor was not enough to compel them to give their best reviews either. The paper suggests finding some way of making Reviewers review code more carefully; perhaps through awarding bonus marks.
Behaviour of Instructors was also analyzed. The paper mentioned that Instructors with large class sizes might try to cut down on their workload by only viewing the Comment Forms that the Reviewers had provided. But this strategy does not give the Instructor the entire story, and is open to manipulation from students.
The paper ends with a discussion about group formations, and how best to diffuse student cheating conspiracies.
At the last moment, they suggest some “web-based [application] with a built-in blind review mechanism” be developed.
Yesterday, a bunch of Greg Wilson’s grad students had dinner at his place. We got to meet his wife, his daughter, and eat some pretty amazing food. It also gave his new grad students an opportunity to say an official “hello”, and introduce themselves to everybody else.
After introducing myself as having had an undergraduate degree in Computer Science and Drama, somebody made some remark about what an interesting combination that is. Greg replied by saying something like “That’s why I chose him”, and told a story about how one of the best programmers he ever knew was originally training to become a Rabbi, and got into Computer Science because he was working on some translations of ancient texts.
This got me thinking. When I started focusing on both Drama and Computer Science, I remember always finding ways where Computer Science could help Drama. I can easily rattle off a bunch of examples:
So, while I was at the UCDP, all of these ideas rattled around in my head. I’ve now come to the realization that this has been completely one-sided.
So let’s switch it around – what can Drama bring to Computer Science?
The easy one is presentation/communication skills. A CS student might be brilliant, but that doesn’t mean they can present or communicate. And if an idea can’t be communicated, it’s worthless.
But what else? Any ideas? I’m going to think about this for a bit, and I’ll see if I can come up with any more.
UPDATE: So here’s what I found…
In 2006, Smart Bear software teamed up with the MeetingPlace development group at Cisco Systems, and over 10 months, produced the “largest-ever case study of its kind” on a “light-weight code review process”.
The results of the study can be found in the free book “Best Kept Secrets of Peer Code Review”.
They can also be found in one of the sample chapters that they’ve put on the site. You can read the study right here, if you’re interested.
Here are my thoughts on the chapter…
First of all, my guard is up a bit. This all seems a bit like a sales pitch, since the software that Cisco ends up using is Smart Bear’s own Code Collaborator. I’m reading the first paragraph, and already I know how it ends – “everybody is happy, the software is improved dramatically, so you should buy Code Collaborator”. Something like that. I’ll be happy when I see some solid data, some numbers, some graphs…
Ok, I’m in at page 54 – they’re talking about how data was collected, and how they pared it down to get the most meaningful results. This is good. This sounds like science, and not a sales pitch. Nice.
The next thing the study talks about is the rate at which lines of code (LOC) are analyzed – the LOC inspection rate. The data they’ve collected shows no discernible 1-1 correlation between the LOC inspection rate and the amount of code to inspect. There were rare exceptions where a reviewer would seem to have such a correlation, but these reviewers tended to be novices who had not participated in many reviews before. Analyzing the LOC inspection rates by code authors (those who are having their code reviewed) also failed to show any correlations. In fact, there were several cases where separate reviewers took widely varied amounts of time on the same chunk of code under review.
So this leaves us with no clear answer on what factors play a part in LOC inspection rate.
The study then begins to discuss the effectiveness of the reviews, and whether or not slow reviews reveal more “defects” (where a defect is defined as any change to the code that wouldn’t have happened without the review). Because defect data from the Code Collaborator database was not considered wholly reliable (see pages 62 and 63 if you want to know why), 300 reviews were randomly plucked from the original 2500, and the discussions in each one were analyzed to gather the defect statistics.
The study then introduces the concept of “defect density”, which is a ratio of the number of defects detected per 1000 lines of code (referred to henceforth as kLOC).
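Just to make the arithmetic concrete, here’s a minimal sketch of the defect density calculation as I understand it (the function and the example numbers are my own, not from the study):

```python
def defect_density(defects_found, loc_reviewed):
    """Defects per 1000 lines of code (kLOC)."""
    return defects_found / (loc_reviewed / 1000.0)

# e.g. 8 defects uncovered in a 250-line review:
print(defect_density(8, 250))  # 32.0 defects/kLOC, which happens to match the study's average
```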
I’ll skip right to some results:
Our reviews had an average 32 defects per 1000 lines of code. 61% of the reviews uncovered no defects; of the others the defect density ranged evenly between 10 and 130 defects per kLOC.
I’m surprised that 61% of the reviews found no defects. That’s remarkably high, in my opinion. True, it’s only a sample of 300 reviews, but still, my instincts were expecting a significantly lower number.
An even more interesting result is that defects found (and therefore, review effectiveness) dropped off for large amounts of code to review. The study notes that:
Anything below 200 lines produces a relatively high rate of defects, often several times the average.
So there seems to be a sweet spot. I wonder if this is a factor in what was causing the surprisingly high number of defect-less reviews in that sample of 300. Perhaps many of those reviews were for large sections of code. Or perhaps they’re for reviews that involve only a single line of code. The study doesn’t go into this.
What the study does go into is a general guideline for limiting the time that code reviews take. Their study noted a stark dropoff in review effectiveness after about an hour. Totally understandable – I think an hour reviewing someone else’s code would probably be my limit before I started getting distracted.
The study then goes on to suggest that the “slower is better” approach to reviewing code is the right idea:
Reviewers slower than 400 lines per hour were above average in their ability to uncover defects. But when faster than 450 lines/hour the defect density is below average in 87% of the cases.
So already there are some guidelines: try to review something around 200 LOC, take your time, but don’t go over an hour. This is useful information.
The study then goes into a test of a slight modification of how reviews are performed: before submitting code for review, the author should annotate the code, describing how the changes are structured, why they were coded the way they were coded, etc. This has a dual benefit: it gives reviewers some clues about how to look at the code (while, hopefully, maintaining the distance they need to do a good job), and it gets the author to go over their code again to weed out obvious defects.
So, some reviews were carried out in this fashion. Here’s what they found:
First, for all reviews with at least one author preparation comment, defects density is never over 30; in fact the most common case is for there to be no defects at all! Second, reviews without author preparation comments are all over the map [in terms of defect density] whereas author-prepared reviews do not share that variability.
The study gives two possible conclusions for these results:
I find myself believing the second conclusion more, simply from experience: if somebody is guiding me through things, suddenly I’m in the passenger seat, and I’m less inclined to disagree with a change if their explanation or defense sounds solid.
However, Smart Bear disagrees:
A survey of the reviews in question show the author is being conscientious, careful, and helpful, and not misleading the reviewer. Often the reviewer will respond or ask a question or open a conversation on another line of code, demonstrating that he was not dulled by the author’s annotations.
…we believe that requiring preparation will cause anyone to be more careful, rethink their logic, and write better code overall.
I’d like to see their data on this. In particular, I’d like to see how often reviewers detected defects in lines of code that the author had annotated. Unfortunately, this data is not provided in the study.
In the last few pages, the study notes that while review size has a detrimental impact on defect density (the number of defects reviewers found per kLOC), there seemed to be a fixed rate on the number of defects found per hour. While this seems at odds with the original discovery that smaller reviews are more effective, they note:
Although the smaller reviews afforded a few especially high rates, 94% of all reviews had a defect rate under 20 defects per hour regardless of review size.
…the take-home point from Figure 22 is that defect rate is constant across all the reviews regardless of external factors.
So, assume that a reviewer has a steady defect detection rate, but that this rate drops off after about an hour. Given a small section of code, of course the number of defects detected will be high. And given a large chunk of code, of course the defects will be more spread out.
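To see how a steady hourly rate squares with the earlier size finding, here’s a quick back-of-the-envelope sketch (my own numbers, using the roughly 15 defects per hour figure from the study’s conclusions):

```python
# Assume a roughly constant detection rate of ~15 defects per hour and a one-hour review.
# The defects-per-kLOC figure then falls as the amount of code under review grows.
DEFECTS_PER_HOUR = 15

for loc in (100, 200, 400, 800):
    density = DEFECTS_PER_HOUR / (loc / 1000.0)  # defects per kLOC for a one-hour review
    print(f"{loc:>4} LOC in one hour -> {density:.1f} defects/kLOC")
# prints 150.0, 75.0, 37.5, and 18.8 defects/kLOC respectively
```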
It’s the steady defect detection rate that bothers me – you would imagine that the detection rate would depend on the quality of the code and also on the experience of the reviewer. But I guess, according to this study, it doesn’t.
And so, the study goes into its conclusions. I can’t really do much better summarizing the first conclusions for you than they did, so I’ll just regurgitate:
- LOC under review should be under 200, not to exceed 400. Anything larger overwhelms reviewers and defects are not uncovered.
- Inspection rates less than 300 LOC/hour result in best defect detection. Rates under 500 are still good; expect to miss significant percentage of defects if faster than that.
- Authors who prepare the review with annotations and explanations have far fewer defects than those that do not. We presume the cause to be that authors are forced to self-review the code.
- Total review time should be less than 60 minutes, not exceed 90. Defect detection rates plummet after that time.
- Expect defect rates around 15 per hour. Can be higher only with less than 175 LOC under review.
- Left to their own devices, reviewers’ inspection rate will vary widely, even with similar authors, reviewers, files, and size of the review.
Given these factors, the single best piece of advice we can give is to review between 100 and 300 lines of code at a time and spend 30-60 minutes to review it. Smaller changes can take less time, but always spend at least 5 minutes, even on a single line of code.
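If I were to turn those rules of thumb into something executable, it might look like this little sketch. The thresholds come from the list above, but the function itself (name, messages, structure) is purely my own illustration:

```python
def check_review(loc, minutes):
    """Flag a planned code review against the study's rules of thumb."""
    warnings = []
    if loc > 400:
        warnings.append("over 400 LOC: reviewers get overwhelmed and defects go unnoticed")
    elif loc > 200:
        warnings.append("over 200 LOC: smaller reviews uncover defects at a higher rate")
    if minutes > 90:
        warnings.append("over 90 minutes: defect detection plummets after that")
    elif minutes > 60:
        warnings.append("over 60 minutes: aim to finish within the hour")
    if minutes < 5:
        warnings.append("spend at least 5 minutes, even on a single line of code")
    elif loc / (minutes / 60.0) > 500:
        warnings.append("inspection rate over 500 LOC/hour: expect to miss defects")
    return warnings or ["looks reasonable: 100-300 LOC in 30-60 minutes is the sweet spot"]

print(check_review(loc=180, minutes=40))  # -> ['looks reasonable: ...']
```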
The study then goes into the differences in effectiveness between heavy-duty Fagan-esque code reviews, and the lightweight style of code review that took place at Cisco. While some of their results from their study exactly match results from studies on heavyweight code reviews (time to spend on a review, when effectiveness drops off), there were some stark differences too.
For example, in the lightweight study, defect detection rate using Smart Bear was 7 times faster than the average rate found across four studies of traditional code review methods. Sounds like we’re getting into that sales-pitch part…
The study then admits that there was no experiment control – reviews using heavyweight techniques weren’t carried out in parallel with the Code Collaborator study, so comparisons of their effectiveness on the MeetingPlace software have little-to-no data to work with.
In the end, their conclusion is that lightweight code review is just as effective as the traditional methods, while being remarkably faster to boot. I’d like to see more evidence to back up the comparison on effectiveness, but faster seems more than plausible (Fagan inspections involve very lengthy meetings, so I’ve read).
Smart Bear agrees with my last point on effectiveness comparison, and notes that future study should be conducted where the same set of code is analyzed using both heavy and lightweight methods.
They finish it off with an invitation to software development shops to contact Smart Bear if they’d like to be involved in such a study.
So that’s the gist of it.