
The Importance of First Impressions: How Theatre Criticism Might Inform Peer Code Review

Discussion Plays

I have seen plays that have very clear stories, and very clear plots.  I leave the theatre knowing what has happened, and I can be pretty confident that the people who sat around me in the theatre all got the same message as I did.

I have also seen plays that are completely the opposite.  There doesn’t appear to be a story.  There doesn’t appear to be a plot.  There are no real characters.  For these plays, all of a sudden, I have to do the work in order to make sense of it all.  And you can be pretty sure that every single audience member got something different out of it.

I want to talk about this second kind of play.  For now, I’m going to call this kind of play a discussion play, because for me, the best part about these kinds of plays is the discussion I have with my friends afterwards. We’ll all sit down in a restaurant or a cafe, order some food, and try to figure out what the hell we just saw.  Theories are tossed around.  Everybody brings their own unique impressions and observations to the table.  A very rich ecosystem of ideas develops.

Back to Peer Code Reviews

(trust me, this all ties together in the end)

When Jason Cohen did his Peer Review at Cisco Study, he noticed that code that had been prepared by the author for review seemed to have a lower defect density than code that had not been prepared.

What do I mean by prepared?  I’ll let Jason Cohen explain:

The idea of “author preparation” is that authors should annotate their source code before the review begins.  Annotations guide the reviewer through the changes, showing which files to look at first and defending the reason and methods behind each code modification.  The theory is that because the author has to re-think all the changes during the annotation process, the author will himself uncover most of the defects before the review even begins, thus making the review itself more efficient.  Reviewers will uncover problems the author truly would not have thought of otherwise.

(Best Kept Secrets of Peer Code Review, p80-81)

Looking at the data, author preparation does seem to have a palpable effect.  As Cohen notes, “for all reviews with at least one author preparation comment, defects density is never over 30; in fact the most common case is for there to be no defects at all!”.

The study has two explanations for this:

  1. Authors gave their code such a thorough look while annotating it that most defects were eliminated right off the bat.
  2. Since authors were actively explaining or defending their code, this sabotaged the reviewers’ ability to do their job effectively.

Cohen buys into the first explanation.  He writes:

A survey of the reviews in question show the author is being conscientious, careful, and helpful, and not misleading the reviewer.  Often the reviewer will respond to or ask a question or open a conversation on another line of code, demonstrating that he was not dulled by the author’s annotations.

I have huge respect for this study.  But I don’t entirely buy this explanation.  As Cohen later mentioned in an email to me, this conclusion is not derived from a controlled experiment, and also suffers from selection bias.

Back to those Discussion Plays

One of the worst things that can happen to me before going into a discussion play is for someone who has already seen it to tell me their impressions of what they thought was going on.  As soon as I hear their opinion, my own objectivity is compromised.  Whether I want to or not, I’ll have their impressions in the back of my mind, and I’ll be using them as a measuring stick or reference point for my own opinions and critiques. They’ve carved a cognitive path through the work, and I’m doomed to notice that path, and react to it.

This is horrible.  This limits me.  This more or less hobbles my ability to contribute something unique to the pool of ideas and criticisms in the after-play discussion.  Every impression I have is tainted by someone else’s first impression.

Don’t get me wrong – I love hearing about everyone’s impressions.  But only after I have formed my own.  A group of us watching a discussion play will each carve a unique cognitive path through the work without influencing one another.  When we finally open up and present these paths and ideas to one another over food and drink, I believe we cover more ground.

I have no data to back this up.  Only years of theatre-going experience.

A Code Review Anecdote

I recently received an email from a colleague of mine.  She wanted me to go over some of her JavaScript to make sure it was up to snuff, since she was relatively new to the language.  I noticed that she had also sent a copy of the email to another developer with pretty sharp JavaScript chops.

When I finally had some free time, I went back to her email to write up the review.  I felt bad – it was late, and the other reviewer hadn’t made a peep on the email thread, and she was hoping to use the code relatively soon.  So I dove in, wrote my review, and sent it off.

A little while later, the other developer sent me his review, saying:

And here was my answer, which I didn’t send to you so as not to influence your reply.  😉

So the author of the code received two unique reviews, and neither of them had influenced the other.  When I read his review, I noticed that we covered some similar ground, but a lot of unique ground as well.  I suspect this wouldn’t have been the case had he sent his review to me first.

The Hypothesis

I hypothesize that author preparation in code review sabotages reviewers’ ability to objectively carve their own unique cognitive paths through the code.  They see things from the author’s point of view, and this dulls their critical eye.  Because of this, I believe fewer defects are detected.

I will take this hypothesis one step further.

I suspect any review, by the author or otherwise, will taint future reviews.  If someone has already reviewed some code, I suspect this review will impact and possibly limit the ability of other reviewers to look at the code objectively.  Like author preparation, I suspect this prevents reviewers from getting their own unique, valuable first impressions of the code.  And I suspect that this causes some defects to go undetected.

Testing This Hypothesis

It’s a simple idea really.  Take a chunk of code, and get some number of developers to review it.  Take this same code, add some author preparation comments, and get more developers to review it.  Do all of the normal balancing, etc.

The question:  does the number of detected defects drop?  If so, this looks like evidence that author preparation sabotages review ability.

Take the experiment one step further.  Take some code, have someone else review it, and then have participants review this code, having seen the first review.  What happens to the number and type of defects that they find?  What happens if they don’t see that initial review?  What yields high defect detection?

Sounds doable.  Sounds interesting.  Sounds like something that would answer a few questions.

Implications and Ideas

So what if one or both of my hypotheses are true?  What does this mean for peer code review?

Well, if author preparation alone sabotages review ability, then the answer is simple:  don’t let the authors prepare the review.  The code goes up, and they stay silent.

But what if both are true?

An idea:  how about I tweak MarkUs’s ReviewBoard so that reviewers cannot see what other reviewers have said until they’ve given one review?  What would happen to the defect detection numbers?  Would reviewers react negatively to this?  Would there be lots of repetition in the comments?  Sounds like something worth looking into.
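I haven’t dug into ReviewBoard’s internals enough to know how hard that tweak would actually be, but the gating logic itself is tiny.  Something like this sketch – Python, with made-up names standing in for whatever ReviewBoard actually uses:

```python
# Toy sketch of "blind first review" gating.  The Review class and the
# reviewer names are hypothetical stand-ins, not ReviewBoard's real data model.
from dataclasses import dataclass

@dataclass
class Review:
    reviewer: str
    body: str

def visible_reviews(all_reviews, current_user):
    """Hide everyone else's reviews until current_user has submitted one."""
    has_reviewed = any(r.reviewer == current_user for r in all_reviews)
    if has_reviewed:
        return all_reviews   # free to see (and be influenced by) the others now
    return [r for r in all_reviews if r.reviewer == current_user]  # i.e. nothing yet

# Example: "alice" hasn't reviewed yet, so she sees an empty list.
reviews = [Review("bob", "Looks good, ship it!")]
print(visible_reviews(reviews, "alice"))   # []
print(visible_reviews(reviews, "bob"))     # [Review(reviewer='bob', ...)]
```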

I’d love to hear some thoughts on this.  Anyone?

Pre-Commit Code Review in MarkUs Development

So, for my Master’s thesis, I’ve pretty much set my heart on code review as my research area.  Here’s why:  it works.  It makes your code better.  It helps you find bugs.  And I’m not just quoting something overheard in a pub – there’s evidence to back these claims up.

And then there’s my own experience to boot – the MarkUs team has been using ReviewBoard as our pre-commit code review tool since last summer, and I wouldn’t ever go back.  If I ever have to work in a shop that doesn’t perform code reviews, I’ll campaign my butt off.

Having said all that, pre-commit reviews certainly aren’t for everyone.  Some downsides of pre-commit over post-commit:

  • It goes against “check in early, check in often”
  • “The major downside of pre-checkin code review is that it puts a major bottle neck on getting changes into the system for other developers to integrate with early enough.” (from this link)
  • For some applications, testing takes hours on end.  Why wait?  Might as well toss it into the repo, let the Continuous Integration build it, and just see what happens.

There are probably more.

My response:  at least for MarkUs, pre-commit code reviews are working just fine, thank you very much.  At least we’re reviewing the code – and any review is better than no review.  But to continue my response, here are a few advantages of pre-commit code review for the MarkUs development team:

  1. Since most students working on MarkUs are doing it for half-credits, this means there’s a lot of turnover every semester.  ReviewBoard lowers the chance of our new hotshot developers  accidentally slipping something ridiculous into the repository and having to do that neat Subversion trick of pulling it out again.  This is the obvious one.
  2. It helps all developers keep track of what everyone else is doing.  This is true for post-commit reviews too, but it’s certainly worth the mention.  It sure beats reading SVN log messages…
  3. It’s a great arena for new developers to ask questions.  Our new developers this semester have been very active on our ReviewBoard, asking plenty of questions about things that are showing up in the diffs under review.  Sometimes, “theoretical” code is posted to demonstrate how something would be done.  Post-commit does not support this nicely.
  4. It’s an excellent way of showing how you’re coming along with a task, without the embarrassment of breaking the build.  MarkUs developers sometimes post up “sneak previews” just to give everybody a taste of how their particular task is coming.  These previews give other developers the chance to critique the direction the submitter is taking, and to offer pointers if they seem to be heading somewhere hazardous.

Yep, there’s just something so satisfying about seeing all of those little green “ship-it’s”, and then firing off your code into the repository… it’s positive reinforcement for code reviews.  And it’s strangely addictive to me.

Another Idea to Augment This Process

A little while ago, I wrote about what I consider to be one of the Achilles’ Heels of Peer Code Review.  Here’s another one: at least for ReviewBoard, during a review all you’re looking at is the code.  That’s all fine and dandy if you want to look at the logic of the code…but what if you want to try it?  Does trying it out help find more bugs than just looking at it?

Well, at least for MarkUs, it’s helped.  I’ve recently started checking out a fresh copy of MarkUs every time a review request is put up, and splat a copy of the diff under review on top.  I run the test suites, and if they pass, I drive it around.  I try out the new features that the diff supposedly adds, or try to recreate the bug that the diff supposedly fixes.

And I’ve caught a few bugs this way.  This is because ReviewBoard is good at showing me what is in the code, but is bad at telling me what is not there.  And that’s perfectly understandable – it’s not psychic.

So here’s an idea – how about writing a little script that checks ReviewBoard for new review requests?  When it finds one, it checks out a brand new copy of MarkUs, splats down the diff under review over top, runs the tests, and then posts back as a ReviewBoard reviewer how many tests passed, how many failed, etc.  If we wanted to get fancy, the script could even do some commenting on the code – maybe using Roodi, Flog, Flay, or some of those other sadistically named Ruby tools to say things about the diff.  The script would be another reviewer.

And then the kicker – the script posts a link in its review where developers can try out a running instance of MarkUs with the applied diff.
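To make that a little more concrete, here’s a rough Python sketch of the test-running part of the bot (I’ve left out the “spin up a running instance and post a link” part).  The fetch_new_review_requests and post_review helpers are pure hand-waving – as I mention below, ReviewBoard’s API can’t even hand me the diff yet – the repository URL is a placeholder, and I’m assuming MarkUs’s test suite runs through rake:

```python
# Rough sketch of a "pre-commit CI" bot for MarkUs review requests.
# fetch_new_review_requests() and post_review() are hypothetical helpers --
# ReviewBoard's API would have to grow a way to hand us the diff first.
import subprocess
import tempfile

SVN_URL = "http://example.com/svn/markus/trunk"   # placeholder repository URL

def test_review_request(diff_text):
    """Check out a fresh MarkUs, apply the diff, run the tests, report back."""
    workdir = tempfile.mkdtemp(prefix="markus-review-")

    # 1. Fresh checkout, so leftovers from earlier reviews can't interfere.
    subprocess.run(["svn", "checkout", "--quiet", SVN_URL, workdir], check=True)

    # 2. Splat the diff under review on top of the checkout.
    subprocess.run(["patch", "-p0", "--directory", workdir],
                   input=diff_text, text=True, check=True)

    # 3. Run the test suite (assuming MarkUs's tests run via rake).
    result = subprocess.run(["rake", "test"], cwd=workdir,
                            capture_output=True, text=True)

    status = "PASSED" if result.returncode == 0 else "FAILED"
    return f"Automated review: test suite {status}.\n\n{result.stdout[-2000:]}"

# The bot itself would just loop over new review requests:
# for request in fetch_new_review_requests():        # hypothetical helper
#     summary = test_review_request(request.diff)    # needs API support for diffs
#     post_review(request, summary)                  # hypothetical helper
```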

Want a fancy name to kick around the office?  Call it pre-commit continuous integration. I just checked – it’s not a common term, but I’m not the first to use it.  Again, so much for being cutting edge.

Would this be useful?  It’s possible that the Roodi/Flog/Flay stuff would bring too much noise to the review process – that’s something to toy with later.  But what about the link to the running instance?  Will that little feature help catch more bugs in MarkUs?  How about for Basie?

I’m curious to find out.

Unfortunately, ReviewBoard doesn’t let me download diffs through its API just yet…if schoolwork lets up for a few days, I’ll look into changing that.

I’d love to hear your thoughts.

Augmenting Code Review Tools: Screen Age and Retina Burn…

Two more ideas to augment the code review process:

Screen Age

Imagine you’re reviewing a piece of code.  There’s something like…500 lines to look at.  So you’re scrolling through the diff, checking things here, checking things there…

And in the background, the review software is recording which sections of the code you’re looking at.

Hm.

I wonder if it’d be a useful metric to know what portions of the code were being looked at the most during a review?  I believe it’s possible (at least within Firefox) to determine the scroll position of a window.  I believe it’s also possible to determine the dimensions of the entire diff page, and the offset coordinates of elements on that page.

So shouldn’t it be possible to determine which elements were on the screen, and for how long?

Imagine seeing a “heat map” over the diff, showing where the reviewer spent most of their time looking.
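The capture side would just be JavaScript recording scroll positions, but turning that log into a heat map is easy enough.  Here’s a toy sketch in Python – the log format, viewport size, and chunk coordinates are all invented for illustration:

```python
# Toy "screen age" aggregation: given a log of scroll positions and the page
# offsets of each diff chunk, estimate how long each chunk was on screen.
# The log format and coordinates below are invented for illustration.

# (timestamp in seconds, scroll offset from the top of the page in pixels)
scroll_log = [(0, 0), (10, 400), (25, 400), (40, 1200), (70, 1200)]
VIEWPORT_HEIGHT = 800

# Each diff chunk's vertical position on the page: name -> (top, bottom)
chunks = {"models/grouping.rb": (0, 600),
          "controllers/submissions_controller.rb": (600, 1500),
          "views/submissions/index.html.erb": (1500, 2200)}

def screen_age(scroll_log, chunks, viewport=VIEWPORT_HEIGHT):
    """Seconds each chunk spent at least partly visible in the viewport."""
    age = {name: 0.0 for name in chunks}
    for (t0, scroll), (t1, _) in zip(scroll_log, scroll_log[1:]):
        view_top, view_bottom = scroll, scroll + viewport
        for name, (top, bottom) in chunks.items():
            if top < view_bottom and bottom > view_top:   # overlaps the viewport
                age[name] += t1 - t0
    return age

for name, seconds in screen_age(scroll_log, chunks).items():
    print(f"{name:45s} {seconds:5.1f}s on screen")
```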

Jason Cohen wrote that code reviews should take at least 5 minutes, and at most 60-90 minutes.  I wonder if collecting this information would help determine whether or not a review was performed carefully enough.

Now, granted, there are plenty of ways to add noise to the data.  If I’m doing a review, and stand up to go get a sandwich, and then my neighbour visits, etc…my computer sits there, gazing at those elements, and the data is more or less worthless…

Which brings me to:

Retina Burn

Eye tracking is not a new technology.  One of my roommates works at a lab where they do experiments with eye tracking tools.  From what he tells me, the eye tracking gear in the lab is pretty expensive and heavy-weight.

But there are a few demos floating around the web that suggest it doesn’t have to be.

So it looks like a webcam and some clever software can do the eye tracking trick too.  This stuff is probably far less accurate than what my roommate uses – but it’s lighter, and cheaper, and therefore more likely to find its way into homes.  So it looks like the day is coming where I’ll eventually be able to use my eyes to control my mouse cursor.

But, more interestingly, this more or less solves the problem with my Screen Age idea:  this technology can tell when you’re looking at the screen.  And it can give a pretty good guess about what area of the screen you’re looking at.

I wonder if collecting this information from code reviewers would be useful – what exact parts of the code have they looked at?  And for how long?  What have they missed?  What did they gloss over?

UPDATE: Karen Reid has also brought to my attention the possibility of using this technology to see how TAs grade assignments by tracking what they’re looking at.  Hm…

Exploring Peer Review in the Computer Science Classroom: Part 2 (Exciting Conclusion)

Now, where was I?

Oh yeah, I was reviewing this paper, and getting right to the good part – the experiment method, data, and results.

I hate to disappoint you, but this section starts as a bit of a downer:

…Unfortunately, the data collected was spotty, to say the least, and was not linked together well enough to support a reasonable, detailed analysis to meet our goals.

Yikes.  Oh well, let’s see what happened…

One issue we discovered was that our design was too broad to give definitive results.  We did, however, have enough data to allow us to narrow down the focus for a second study.  From the analysis of one class, we were able to identify two interesting areas for further research.  The type of review appears [sic] have a significant effect on the length and focus of the review.  We also found evidence that students reviewed some of the concepts differently than they did others.  Both of these findings should be explored in the future work.

Good.  At least there’s some groundwork for somebody to do a future study.

Reading on, I’m impressed with the scope of their experiment.  Instead of experimenting on a single classroom, they ran the study across 8 classrooms, each with somewhere between 10 and 60 participants.  Ambitious.  Nice.

While their experiment may have been too broad to get the results they were looking for, it certainly has more authority than the studies that just used a single, small class.

However, maybe I spoke too soon:

The data collected for this study did not occur as smoothly as we would have wished. … As a result, we were able to collect a large amount of data but it is not as complete as we would have liked.

Hm.  Doesn’t sound that great.

Apparently, data was supposed to be collected from surveys, review rubrics, and questionnaires, but it looks like the questionnaire data kind of fell through:

The number of responses to the questionnaires was low.  Three classes had no post-questionnaire responses at all.  Of those classes that did have responses for the second questionnaire, the number was too small to lend any confidence to a statistical analysis…

Review rubric data was a little more interesting – 996 completed rubrics were collected from 299 reviewers from the 8 classes over the course of the study.  Nice.  But, again, it wasn’t all flowers and hugs:

While the amount of data was large, it is also incomplete.  Of those classes which were intended to have both training and a peer reviews [sic], three of them were not able to complete the second review assignment and, so, have nothing to be compared to.  Two of the other classes have only a moderate number of participants (15-20) which is not as high as we would have liked for our statistical analysis.  One class provided no viable information at all…

Things just seem to get worse and worse for these guys.

And, not to kick them while they’re down, but their writing seems to get worse and worse too.  I’m noticing more typos and tense errors as I go along.  Maybe the stress was getting to them…

So, what did they find?  Drum roll, please…

Final Results

Surprise!  It’s inconclusive!

It sounds like they found more questions than answers…

Is it type or order (or both) that is causing the effects the [sic] training and peer review?

It took me a little while to figure out what they were asking here.  Apparently, before engaging in peer review exercises, students would critique material that was provided by the instructor.  It seems that this training review step, on average, generated more verbose comments (though not necessarily more relevant ones) than the peer review step, while the peer review step tended to produce more relevant comments.  The experimenters have a few theories on why that is:

  1. Social pressure could cause students to be less verbose in their critiques of one another.
  2. Increased learning after the training exercise could cause the students to be more succinct and precise in their reviews.
  3. Students were more engaged in the training exercise, and felt more inclined to be verbose (even though they were less precise).

Unfortunately, the experimenters note, a lot rests on the motivation and attitude of the students, which wasn’t really considered or measured during the design of the experiment.  So, to break it down simply: the type of review (training vs peer) and the order (training first, then peer review) caused some numbers to change in their tables…but they don’t really know why.

They had questions on other things too…

Why are there differences in how concepts are reviewed?

  • Are there differences in conceptual difficulty?
  • Do the reviews improve student learning of these concepts?

The three CS topics that were focused on during these courses were the OOP concepts of Abstraction, Decomposition, and Encapsulation.  The experimenters also theorized that successful reviews go through the following steps:

  1. Analysis
  2. Evaluation
  3. Explanation
  4. Revision

(Unspecified) variations in how the students used these steps, and how verbose they were at each step, caught the experimenters’ attention.  They wonder if this has something to do with the conceptual difficulty of each topic, or if the reviews were affecting the students’ understanding over time.

More questions they brought up…

Is reviewing an engaging and interesting task in computer science?

Very good question.  The experimenters noted that they had no measure of students’ interest, feelings, or engagement in the reviewing process.  They also point out that it is important to look at these attitudes over time for improvements or problems.

Are there significant learning benefits to reviewing in the early computer science curriculum as compared to other, common homework/lab exercises?

I’ll let them explain this one:

While we have identified a number of potential benefits from reviewing, we have not shown that it is better than or as good as what we currently do.  We require some sort of baseline to compare our efforts to.  We need a control group in our experiments in order to judge effectiveness.

And then the paper pretty much ends.

Where To Go From Here

The authors do a good job of lining up some interesting questions towards the end.  I guess this is how you salvage an experiment that didn’t go as planned – find the deeper questions, and see if somebody else can do a better job.

Or maybe, if you give the authors enough time, they’ll try to do the better job themselves. I think I’ve found the next paper to review.

Exploring Peer Review in the Computer Science Classroom: Part 1

Exploring Peer Review in the Computer Science Classroom

by Scott Turner and Manuel A. Pérez-Quiñones

I’m new at reading papers, so I’ve gotten used to 5- or 10-pagers.  This looks like the big one, though – 69 pages.

I assume they have something significant to say.  The title sure sounds interesting, especially considering what I’m looking for.

Anyhow, it’s a big paper, and there’s a lot to go through.  Let’s get started.

Right off the bat, I can see that they’re interested in the same problem that I am:

Peer review, while it has many known benefits (Zeller 2000; Papalaskari 2003; Wolfe 2004; Hamer, Ma et al. 2005; Trytten 2005), and is used extensively in other fields (Falchikov and Goldfinch 2000; Topping, Smith et al. 2000; Liu and Hansen 2002; Dossin 2003; Carlson and Berry 2005) and in industry (Anderson and Shneiderman 1977; Anewalt 2005; Hundhausen, Agrawal et al. 2009), is not as widely used in the computer science curriculum.  This may be due, in part, to a lack of information about what, who and when to review in order to achieve specific goals in computer science.

This next part got my attention:

That is not to say that the literature is silent on these issues.  The studies make these choices but there are few reasons given for the decisions and even fewer comparisons performed to show relative value of those options.

Holy smokes.  A pretty bold critique of those previous papers (at least one of which I’ve already reviewed).  It sounds like Scott and Manuel were as disappointed in some of the peer code review literature as I was.  If I were in an audience being read this paper aloud, I would “wooo!” at this point.

What is needed is a clearer understanding of the requirements that the discipline imposes on the peer review process so that it may be effectively used.

Cool – I’m looking forward to seeing what they dug up.

Reading onwards through the introduction, I’m seeing the same basic arguments for peer code review that I’ve seen elsewhere.  I’ll summarize:

  • PCR (peer code review)  involves active use of higher levels of Bloom’s Taxonomy:  synthesis, analysis and evaluation, both for reviewers and review-ees
  • PCR prepares students for industry, since code review is (or should be) a common part of professional software development

Soon after, a good point is brought up – PCR is potentially a beneficial learning activity, but it all depends on the goals of a particular assignment.  A particular review structure may be better for improving code quality, and another might be better for increasing student motivation.  These considerations need to be taken into account when choosing a PCR structure.

By PCR structure, I think the writers mean:

  • What is being reviewed – source code vs design diagrams, for example
  • Who is being reviewed – students could review one another, or they could all review a piece of code provided by the instructor
  • When the review occurs – what level of students should take part in PCR?  Is there a minimum grade level that must be achieved before PCR is effective?  And when, during a project, should PCR happen?  Early in the design process, or after-the-fact – like a “code autopsy”?

These are good questions.  No wonder this paper is so long – I think their experiment is going to try to answer them all.

And just when things are starting to get good…they go into a literature review.  Yech.  I know it’s important to lay the groundwork, but I think I just felt my eyes turn to oatmeal.

The literature review starts by briefly listing off those sources again – past papers that have tried to deal with this topic.  They categorize them based on what the papers try to deliver, and then give them a light slam for not having a scientific basis on which to form their PCR structures.  I heartily agree.

The first part of the literature review discusses the benefits of peer review in other fields.  Papers as far back as the 1950’s are cited.  I think it goes without saying that having other people critique your work can be a great way of receiving constructive feedback (ask any playwright, for example).  I guess these fellas feel they have to be pretty rigorous though, and really ground their argument in some solid past work.  Power to them.

An interesting notion is brought up – peer reviews within the same groups over an extended length of time allow for “interaction between…peers and for the building of knowledge”.  A one-time review, however, “[lends] itself more to a cognitive approach…more attention can be paid to the changes in the students’ thought processes”.  Hm.

This fellow named Topping seems to be quite popular with these guys.  Apparently, he came up with something called “Topping’s Peer Review Typology”, and I get the feeling that he is one of their primary sources for figuring out different ways of constructing PCR structures.

Oh, and an important morsel just got snuck in:

We are interested in how the student is affected by the acts of reviewing and being reviewed rather than by the social interaction occurring during the process.

So that sense of community that that other paper was talking about – not being looked at here.

The paper then goes on to chisel down some of the jargon from Topping’s typology, and make it fit the field of Computer Science.  By doing this, the writers simply reiterate the variables they’re going to be playing with:  what, who, and when.

The paper then dives into a two page summary of some peer review papers, and the various results that they found.  They note a few instances where peer review seemed to improve student performance, and other cases where peer review resulted in semi-disaster.  There are just as many theories as there are conflicting accounts.  From reading this stuff, it seems that peer review is a vast and complicated topic, and nobody seems to really have a firm grasp on it just yet.

Likewise, rubric creation seems to be a bit of a contentious topic:

While there seems to be a general consensus that rubrics are important and that they improve the peer review activity, there is not as much agreement on how they should be implemented.

However, it is clear that rubrics are useful in peer review for novice reviewers:

Rubrics can supply the guidance students need to learn how to evaluate an assignment.  It provides the needed scaffolding until the students are comfortable with the process and the domain to make correct judgements.

Makes sense to me.

The paper then asks an important question: what makes a “successful” review in the education context?  Greater understanding?  Learning new concepts?  Improved grades?  Better designs?  Fewer code defects?

The answer:  it really depends on the instructor, and what the course wants from the PCR process.  For example, what is more important – having the reviewer learn to review?  Or having the review-ee receive good feedback?  Or both?  PCR is complex, and has lots of things to consider…

The next section highlights how technology is used to support peer review.  One particularly interesting example is a Moodle module that allows for peer reviews on assignments.  Apparently, the authors are fans.  I’ve never used Moodle before, and haven’t yet found the module they’re talking about, but it sounds worth investigating.

The very next section details their experiment – their method, their data, and their results.  However, this post is getting a bit long, so I’m going to stop here, and continue on in a second post.

Stay tuned for the exciting conclusion!