
Research Experiment: A Recap

Before I start diving into results, I’m just going to recap my experiment so we’re all up to speed.

I’ll try to keep it short, sweet, and punchy – but remember, this is a couple of months of work right here.

Ready?  Here we go.

What I was looking for

A quick refresher on what code review is

Code review is like the software industry equivalent of a taste test.  A developer makes a change to a piece of software, puts that change up for review, and a few reviewers take a look at that change to make sure it’s up to snuff.  If some issues are found during the course of the review, the developer can go back and make revisions.  Once the reviewers give it the thumbs up, the change is put into the software.

That’s an oversimplified description of code review,  but it’ll do for now.

So what?

What’s important is to know that it works. Jason Cohen showed that code review reduces the number of defects that enter the final software product. That’s great!

But there are some other cool advantages to doing code review as well.

  1. It helps to train up new hires.  They can lurk during reviews to see how more experienced developers look at the code.  They get to see what’s happening in other parts of the software.  They get their code reviewed, which means direct, applicable feedback.  All good things.
  2. It helps to clean and homogenize the code.  Since the code will be seen by their peers, developers are generally compelled to not put up “embarrassing” code (or, if they do, to at least try to explain why they did).  Code review is a great way to compel developers to keep their code readable and consistent.
  3. It helps to spread knowledge and good practices around the team.  New hires aren’t the only ones to benefit from code reviews.  There’s always something you can learn from another developer, and code review is where that will happen.  And I believe this is true not just for those who receive the reviews, but also for those who perform the reviews.

That last one is important.  Code review sounds like an excellent teaching tool.

So why isn’t code review part of the standard undergraduate computer science education?  Greg and I hypothesized that the reason code review isn’t taught is that we don’t know how to teach it.

I’ll quote myself:

What if peer code review isn’t taught in undergraduate courses because we just don’t know how to teach it?  We don’t know how to fit it in to a curriculum that’s already packed to the brim.  We don’t know how to get students to take it seriously.  We don’t know if there’s pedagogical value, let alone how to show such value to the students.

The idea

Inspired by work by Joordens and Pare, Greg and I developed an approach to teaching code review that integrates itself nicely into the current curriculum.

Here’s the basic idea:

Suppose we have a computer programming class.  Also suppose that after each assignment, each student is randomly presented with anonymized assignment submissions from some of their peers.  Students will then be asked to anonymously peer grade these assignment submissions.

Now, before you go howling your head off about the inadequacy / incompetence of student markers, or the PeerScholar debacle, read this next paragraph, because there’s a twist.

The assignment submissions will still be marked by TA’s as usual.  The grades that a student receives from her peers will not directly affect her mark.  Instead, the student is graded on how well she graded her peers: the peer reviews that she completes will be compared with the grades that the TA’s delivered.  The closer her grades are to the TA’s, the better the mark she gets on the “peer grading” component (which is distinct from the mark she receives for her programming assignment).
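To make “closeness” concrete, here’s a minimal sketch of one way the peer grading component could be scored – the linear penalty on absolute grade difference is my placeholder for illustration, not a settled formula:

```python
# A sketch, not the actual scheme: score a student's peer grading by how
# close it lands to the TA's grades for the same submissions.

def peer_grading_score(peer_grades, ta_grades, max_grade=10):
    """Return a score in [0, 1]; 1.0 means the student matched the TA exactly."""
    assert len(peer_grades) == len(ta_grades)
    # Penalize each grade by its distance from the TA's, relative to the scale.
    penalties = [abs(p - t) / max_grade for p, t in zip(peer_grades, ta_grades)]
    return 1.0 - sum(penalties) / len(penalties)

# Example: a student peer-graded three submissions that a TA also graded.
print(peer_grading_score([7, 4, 9], [8, 4, 10]))  # ~0.93
```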

Now, granted, the idea still needs some fleshing out, but already, we’ve got some questions that need answering:

  1. Joordens and Pare showed that for short written assignments, you need about 5 peer reviews to predict the mark that the TA will give.  Is this also true for computer programming assignments?
  2. Grading students based on how much their peer grading matches TA grading assumes that the TA is an infallible point of reference.  How often do TA’s disagree amongst themselves?  (One way to measure this is sketched after the list.)
  3. Would peer grading like this actually make students better programmers?  Is there a significant difference in the quality of their programming after they perform the grading?
  4. What would students think of peer grading computer programming assignments?  How would they feel about it?
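For question 2, one simple way to quantify TA disagreement is to have two TA’s grade the same pile and compare their marks – say, with a mean absolute difference and a correlation.  A minimal sketch (the marks and the choice of measures are made up for illustration):

```python
# Illustrative only: quantify how much two TA's agree on the same pile.
from statistics import mean

ta1 = [8, 6, 9, 5, 7, 10]  # made-up marks from the first TA
ta2 = [7, 6, 8, 6, 7, 9]   # made-up marks from the second TA

# Mean absolute difference: the average gap between the two graders.
mad = mean(abs(a - b) for a, b in zip(ta1, ta2))

def pearson(xs, ys):
    """Pearson correlation: do the graders rank submissions similarly?"""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

print(mad, pearson(ta1, ta2))
```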

So those were my questions.

How I went about looking for the answers

Here’s the design of the experiment in a nutshell:

Writing phase

I have a treatment group, and a control group.  Both groups are composed of undergraduate students.  After writing a short pre-experiment questionnaire, participants in both groups will have half an hour to work on a short programming assignment.

The treatment group will then have another half an hour to peer grade some submissions for the assignment they just wrote.  The submissions that they mark will be mocked up by me, and will be the same for each participant in the treatment group.  The control group will not perform any grading – instead, they will do an unrelated vocabulary exercise for the same amount of time.

Participants in both groups will then have another half an hour to work on a second short programming assignment.  Finally, participants in the treatment group will write a short post-experiment questionnaire to capture their impressions of the peer grading experience, and then the participants are released.

Here’s a picture to help you visualize what you just read.

Tasks for each group in my experiment.

So now I’ve got two piles of submissions – one for each assignment, 60 submissions in total, or 30 per pile.  I add my five mock-ups to each pile, bringing each pile to 35 submissions, and the total to 70.

Marking phase

I assign ID numbers to each submission, shuffle them up, and hand them off to some graduate-level TA’s that I hired.  The TA’s will grade each assignment using the same marking rubric that the treatment group used to peer grade.  They will not know whether they are grading a treatment group submission, a control group submission, or a mock-up.

Choosing phase

After the grading is completed, I remove the mock-ups, and pair up the submissions in both piles based on who wrote them.  So now I’ve got 30 pairs of submissions: one for each student.  I then ask my graders to look at each pair, knowing that both were written by the same student, to choose which one they think is better coded, and to rate and describe the difference (if any) between the two.  This is an attempt to catch possible improvements in the treatment group’s code that might not be captured by the marking rubric.
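As a rough sketch, the pairing and the tally for the choosing phase might look like this (the IDs and the grader’s choices are made up for illustration):

```python
# Hypothetical sketch of the choosing phase: pair each student's two
# submissions, then tally which one the grader preferred. Data is made up.
submissions = [
    # (student_id, assignment_number, submission_id)
    ("s01", 1, "sub_a"), ("s01", 2, "sub_b"),
    ("s02", 1, "sub_c"), ("s02", 2, "sub_d"),
]

pairs = {}
for student, assignment, sub in submissions:
    pairs.setdefault(student, {})[assignment] = sub
# pairs == {"s01": {1: "sub_a", 2: "sub_b"}, "s02": {1: "sub_c", 2: "sub_d"}}

# The grader's choice per pair: which assignment (1 or 2) looked better coded.
choices = {"s01": 2, "s02": 1}

second_preferred = sum(1 for c in choices.values() if c == 2)
print(f"{second_preferred}/{len(choices)} pairs: assignment 2 judged better")
```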

So that’s what I did

So everything you’ve just read is what I’ve just finished doing.

Once the submissions are marked, I’ll analyze the marks for the following:

  1. Comparing the two groups, is there a significant improvement in marks from the first assignment to the second for the treatment group?  (A sketch of this comparison follows below.)
    1. If there was an improvement, on which criteria?  And how much of an improvement?
  2. How did the students do at grading my mock-ups?  How similar were their peer grades to what the TAs gave?
  3. How much did my two graders agree with one another?
  4. During the choosing phase, did my graders tend to choose the second assignment over the first assignment more often for the treatment group?

And I’ll also analyze the post-experiment questionnaire to get student feedback on their grading experience.
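For the first point, the comparison I have in mind is something like a two-sample test on each student’s improvement (assignment 2 mark minus assignment 1 mark).  A minimal sketch – the numbers are invented, and scipy is just one convenient way to run the test:

```python
# Hypothetical sketch of analysis point 1: did the treatment group's marks
# improve more from assignment 1 to assignment 2 than the control group's?
# The data is made up; scipy is an assumed dependency.
from scipy import stats

# Improvement = assignment 2 mark minus assignment 1 mark, per student.
treatment_improvement = [2, 1, 3, 0, 2, 1, 2, 3]
control_improvement   = [0, 1, -1, 0, 1, 0, 1, 0]

# Two-sample t-test (Welch's, since the group variances may differ).
t_stat, p_value = stats.ttest_ind(
    treatment_improvement, control_improvement, equal_var=False
)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```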

Ok, so that’s where I’m at.  Stay tuned for results.

Research Proposal #1: The Effects of Author Preparation in Peer Code Review

The Problem Space

Click here to read about my problem space

Related Work

During his study at Cisco Systems, Jason Cohen noticed that review requests with some form of author preparation consistently had fewer defects found in them.

Jason Cohen explains what author preparation is…

The idea of “author preparation” is that authors should annotate their source code before the review begins.  Annotations guide the reviewer through the changes, showing which files to look at first and defending the reason and methods behind each code modification.  The theory is that because the author has to re-think all the changes during the annotation process, the author will himself uncover most of the defects before the review even begins, thus making the review itself more efficient.  Reviewers will uncover problems the author truly would not have thought of otherwise.

(Best Kept Secrets of Peer Code Review, p80-81)

Cohen gives two theories to account for the drop in defects:

  1. By performing author preparation, authors were effectively self-reviewing, and removed defects that would normally be found by others.
  2. Since authors were actively explaining, or defending, their code, this sabotaged the reviewers’ ability to do their job objectively and effectively.  There is a “blinding effect”.

In his study, Cohen subscribes to the first theory.  He writes:

A survey of the reviews in question show the author is being conscientious, careful, and helpful, and not misleading the reviewer.  Often the reviewer will respond to or ask a question or open a conversation on another line of code, demonstrating that he was not dulled by the author’s annotations.

While it’s certainly possible that Cohen is correct, the evidence to support his claim is tenuous at best, as it suffers from selection bias, and has not been drawn from a properly controlled experiment.

What do I want to do?

I want to design a proper, controlled experiment in an attempt to figure out exactly why the number of found defects drops when authors prepare their review requests.

My experiment is still being designed, but at its simplest:

We devise a review request with several types of bugs intentionally inserted.  We create “author preparation” commentary to go along with the review request.  We show the review request to a series of developers – giving some the author preparation commentary, and giving others none – and ask the developers to perform a review.

We then take measurements of the number, type, and density of the defects that they find.
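As a sketch of that measurement step, the tally per condition might look like this (the defect IDs, reviewers, and counts are all made up for illustration):

```python
# Hypothetical sketch of the measurement step: tally which seeded defects
# each reviewer reported, per condition. All IDs and data are made up.
from collections import defaultdict

SEEDED_DEFECTS = {"D1", "D2", "D3", "D4", "D5"}  # bugs planted in the review request

# reviewer -> (condition, set of defect IDs they reported)
reviews = {
    "r1": ("with_prep",    {"D1", "D3"}),
    "r2": ("with_prep",    {"D1"}),
    "r3": ("without_prep", {"D1", "D2", "D4"}),
    "r4": ("without_prep", {"D2", "D3", "D5"}),
}

rates_by_condition = defaultdict(list)
for condition, found in reviews.values():
    # Detection rate: fraction of the seeded defects this reviewer caught.
    rates_by_condition[condition].append(len(found & SEEDED_DEFECTS) / len(SEEDED_DEFECTS))

for condition, rates in sorted(rates_by_condition.items()):
    print(condition, sum(rates) / len(rates))
```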

Why do you care?

If it is shown that author preparation does not negatively affect the number of defects that the reviewers find, this is strong evidence for Cohen’s claim that author preparation is good.  The practice can then be adopted and argued for in order to increase the effectiveness of code reviews.

On the other hand, if it is shown that author preparation negatively affects the number of defects that the reviewers find, this has some interesting consequences.

The obvious one is the conclusion that authors should not prepare their review requests, so as to maximize the number of defects that their reviewers find.

The less obvious one takes the experimental result a step further.  Why should this “blinding effect” stop at author preparation?  Perhaps a review by any participant will negatively affect the number of defects found in subsequent reviews?  The experiment will be designed to investigate this possibility as well.

Either way, the benefits or drawbacks of author preparation will hopefully be revealed, to the betterment of the code review process.

Research Proposal: My Problem Space

I want to talk about peer code review.

The code inspection process was formally brought to light by Michael Fagan in the 1970s, when he showed that code inspection improves the quality of source code.  Code inspection, coupled with rigorous testing / QA, helps to reduce the number of defects in a piece of software before it is released – which is really the cheapest time to find and fix those defects.

Jason Cohen took Fagan’s inspection technique out of the conference room, and helped to bring it online.  After a study at Cisco Systems, he found (among other things) that light-weight code reviews were just as effective as (or more effective than) Fagan inspections, and took less time.

There are myriad light-weight peer code review tools available now, and code review has become a more common software development practice.*

That’s really great.  But how can we make it better? Here are some research project proposals…

*For more information on code review, I’ve written ad nauseam about it…

The Importance of First Impressions: How Theatre Criticism Might Inform Peer Code Review

Discussion Plays

I have seen plays that have very clear stories, and very clear plots.  I leave the theatre knowing what has happened, and I can be pretty confident that the people who sat around me in the theatre all got the same message as I did.

I have also seen plays that are completely the opposite.  There doesn’t appear to be a story.  There doesn’t appear to be a plot.  There are no real characters.  For these plays, all of a sudden, I have to do the work in order to make sense of it all.  And you can be pretty sure that every single audience member got something different out of it.

I want to talk about this second kind of play.  For now, I’m going to call this kind of play a discussion play, because for me, the best part about these kinds of plays is the discussion I have with my friends afterwards. We’ll all sit down in a restaurant or a cafe, order some food, and try to figure out what the hell we just saw.  Theories are tossed around.  Everybody brings their own unique impressions and observations to the table.  A very rich ecosystem of ideas develops.

Back to Peer Code Reviews

(trust me, this all ties together in the end)

When Jason Cohen did his Peer Review at Cisco Study, he noticed that code that had been prepared by the author for review seemed to have a lower defect density than code that had not been prepared.

What do I mean by prepared?  I’ll let Jason Cohen explain:

The idea of “author preparation” is that authors should annotate their source code before the review begins.  Annotations guide the reviewer through the changes, showing which files to look at first and defending the reason and methods behind each code modification.  The theory is that because the author has to re-think all the changes during the annotation process, the author will himself uncover most of the defects before the review even begins, thus making the review itself more efficient.  Reviewers will uncover problems the author truly would not have thought of otherwise.

(Best Kept Secrets of Peer Code Review, p80-81)

Looking at the data, author preparation does seem to have a palpable effect.  As Cohen notes, “for all reviews with at least one author preparation comment, defects density is never over 30; in fact the most common case is for there to be no defects at all!”.

The study has two explanations for this:

  1. Authors gave their code such a thorough look while annotating them, that most defects were eliminated right off the bat.
  2. Since authors were actively explaining, or defending, their code, this sabotaged the reviewers’ ability to do their job effectively.

Cohen buys into the first explanation.  He writes:

A survey of the reviews in question show the author is being conscientious, careful, and helpful, and not misleading the reviewer.  Often the reviewer will respond to or ask a question or open a conversation on another line of code, demonstrating that he was not dulled by the author’s annotations.

I have huge respect for this study.  But I don’t entirely buy this explanation.  As Cohen later mentioned in an email to me, this conclusion is not derived from a controlled experiment, and also suffers from selection bias.

Back to those Discussion Plays

One of the worst things that can happen to me before going into a discussion play is for someone who has already seen it to tell me their impressions of what they thought was going on.  As soon as I hear their opinion, my own objectivity is compromised.  Whether I want to or not, I’ll have their impressions in the back of my mind, and I’ll be using them as a measuring stick or reference point for my own opinions and critiques.  They’ve carved a cognitive path through the work, and I’m doomed to notice that path, and react to it.

This is horrible.  This limits me.  This more or less hobbles my ability to contribute something unique to the pool of ideas and criticisms in the after-play discussion.  Every impression I have is tainted by someone else’s first impression.

Don’t get me wrong – I love hearing about everyone’s impressions, but only after I have formed my own.  A group of us watching a discussion play will carve unique cognitive paths through the work without influencing one another.  When we finally open up and present those paths and ideas to one another over food and drink, I believe we cover more ground.

I have no data to back this up.  Only years of theatre-going experience.

A Code Review Anecdote

I recently received an email from a colleague of mine.  She wanted me to go over some of her JavaScript to make sure it was up to snuff, since she was relatively new to the language.  I noticed that she had also sent a copy of the email to another developer who has pretty sharp JavaScript chops.

When I finally had some free time, I went back to her email to write up the review.  I felt bad – it was late, and the other reviewer hadn’t made a peep on the email thread, and she was hoping to use the code relatively soon.  So I dove in, wrote my review, and sent it off.

A little while later, the other developer sent me his review, saying:

And here was my answer, which I didn’t send to you so as not to influence your reply.  😉

So the author of the code received two unique reviews, and neither of them had influenced the other.  When I read his review, I noticed that we covered some similar ground, but a lot of unique ground as well.  I suspect this wouldn’t have been the case had he sent his review to me first.

The Hypothesis

I hypothesize that author preparation in code review sabotages reviewers’ ability to objectively carve their own unique cognitive paths through the code.  They see things from the author’s point of view, and this dulls their critical eye.  Because of this, I believe fewer defects are detected.

I will take this hypothesis one step further.

I suspect any review, by the author or otherwise, will taint future reviews.  If someone has already reviewed some code, I suspect this review will impact and possibly limit the ability of other reviewers to look at the code objectively.  Like author preparation, I suspect this prevents reviewers from getting their own unique, valuable first impressions of the code.  And I suspect that this causes some defects to go undetected.

Testing This Hypothesis

It’s a simple idea, really.  Take a chunk of code, and get some number of developers to review it.  Take this same code, add some author preparation comments, and get a different set of developers to review it.  Do all of the normal balancing, etc.

The question: does the number of detected defects drop?  If so, this looks like evidence that author preparation sabotages review ability.

Take the experiment one step further.  Take some code, have someone else review it, and then have participants review this code, having seen the first review.  What happens to the number and type of defects that they find?  What happens if they don’t see that initial review?  What yields high defect detection?
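To quantify “unique ground” between a first and a second review, one simple measure is the overlap between the sets of defects each review found.  A minimal sketch with made-up defect IDs:

```python
# Hypothetical sketch: measure how much of a second review's findings
# overlap with a first review the participant did (or didn't) see.
first_review  = {"D1", "D2", "D5"}
second_review = {"D2", "D3", "D4", "D5"}

overlap = first_review & second_review
unique_to_second = second_review - first_review

# Jaccard similarity: 1.0 means the two reviews found the same defects.
jaccard = len(overlap) / len(first_review | second_review)

print(f"overlap: {sorted(overlap)}, unique: {sorted(unique_to_second)}, jaccard: {jaccard:.2f}")
```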

Sounds doable.  Sounds interesting.  Sounds like something that would answer a few questions.

Implications and Ideas

So what if one or both of my hypotheses are true?  What does this mean for peer code review?

Well, if author preparation alone sabotages review ability, then the answer is simple:  don’t let the authors prepare the review.  The code goes up, and they stay silent.

But what if both are true?

An idea:  how about I tweak MarkUs’s ReviewBoard so that reviewers cannot see what other reviewers have said until they’ve given one review?  What would happen to the defect detection numbers?  Would reviewers react negatively to this?  Would there be lots of repetition in the comments?  Sounds like something worth looking into.
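The gating rule itself is simple.  Here’s a hypothetical sketch of the logic – this is not ReviewBoard’s actual API, just the behaviour I have in mind:

```python
# Hypothetical sketch of the gating rule (not ReviewBoard's actual API):
# a reviewer sees other people's comments only after submitting a review.

def visible_comments(all_comments, viewer, reviewers_done):
    """Return the comments this viewer is allowed to see."""
    if viewer in reviewers_done:
        return all_comments  # already submitted a review: see everything
    # Not yet reviewed: only their own comments are visible.
    return [c for c in all_comments if c["author"] == viewer]

comments = [
    {"author": "alice", "text": "Off-by-one in the loop bound?"},
    {"author": "bob",   "text": "This helper duplicates existing code."},
]

# carol hasn't submitted a review yet, so she sees nothing from others.
print(visible_comments(comments, "carol", reviewers_done={"alice", "bob"}))  # []
```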

I’d love to hear some thoughts on this.  Anyone?

Taking a peek at Jason Cohen, Smart Bear, and Code Collaborator

Check this out.

Hang on, hang on, let me back up.

Jason Cohen.  Is that name familiar?  If it isn’t, this is the guy who founded Smart Bear Software.  Smart Bear Software has a piece of software called Code Collaborator, which is a “web-based tool that simplifies and expedites peer code reviews.”  Here’s the sales pitch.

I’m posting all of this because of what I’m looking for – papers concerning code review, and the relationship (or lack thereof) between code review techniques and computer science education.

Which all traces back to that first link I posted – a series of whitepapers and articles on Code Collaborator, and code reviews in general.  This is good stuff.  Sure, it’s not really oriented around computer science education, but these people seem to know what they’re talking about.

There’s even a free book detailing their massive code review study – apparently the largest-ever case study of its kind.

I’ve ordered the free book, just for kicks.  In the meantime, I’m going to glance over their Cisco study.