Discussion Plays
I have seen plays that have very clear stories, and very clear plots. I leave the theatre knowing what has happened, and I can be pretty confident that the people who sat around me in the theatre all got the same message as I did.
I have also seen plays that are completely the opposite. There doesn’t appear to be a story. There doesn’t appear to be a plot. There are no real characters. For these plays, all of a sudden, I have to do the work in order to make sense of it all. And you can be pretty sure that every single audience member got something different out of it.
I want to talk about this second kind of play. For now, I’m going to call this kind of play a discussion play, because for me, the best part about these kinds of plays is the discussion I have with my friends afterwards. We’ll all sit down in a restaurant or a cafe, order some food, and try to figure out what the hell we just saw. Theories are tossed around. Everybody brings their own unique impressions and observations to the table. A very rich ecosystem of ideas develops.
Back to Peer Code Reviews
(trust me, this all ties together in the end)
When Jason Cohen did his Peer Review at Cisco Study, he noticed that code that had been prepared by the author for review seemed to have a lower defect density than code that had not been prepared.
What do I mean by prepared? I’ll let Jason Cohen explain:
The idea of “author preparation” is that authors should annotate their source code before the review begins. Annotations guide the reviewer through the changes, showing which files to look at first and defending the reason and methods behind each code modification. The theory is that because the author has to re-think all the changes during the annotation process, the author will himself uncover most of the defects before the review even begins, thus making the review itself more efficient. Reviewers will uncover problems the author truly would not have thought of otherwise.
(Best Kept Secrets of Peer Code Review, p80-81)
Looking at the data, author preparation does seem to have a palpable effect. As Cohen notes, “for all reviews with at least one author preparation comment, defects density is never over 30; in fact the most common case is for there to be no defects at all!”.
The study has two explanations for this:
- Authors gave their code such a thorough look while annotating it that most defects were eliminated right off the bat.
- Since authors were actively explaining or defending their code, this sabotaged the reviewers’ ability to do their job effectively.
Cohen buys into the first explanation. He writes:
A survey of the reviews in question show the author is being conscientious, careful, and helpful, and not misleading the reviewer. Often the reviewer will respond to or ask a question or open a conversation on another line of code, demonstrating that he was not dulled by the author’s annotations.
I have huge respect for this study. But I don’t entirely buy this explanation. As Cohen later mentioned in an email to me, this conclusion is not derived from a controlled experiment, and also suffers from selection bias.
Back to those Discussion Plays
One of the worst things that can happen to me before going into a discussion play is for someone who has already seen it to tell me their impressions of what they thought was going on. As soon as I hear their opinion, my own objectivity is compromised. Whether I want to or not, I’ll have their impressions in the back of my mind, and I’ll be using them as a measuring stick or reference point for my own opinions and critiques. They’ve carved a cognitive path through the work, and I’m doomed to notice that path, and react to it.
This is horrible. This limits me. This more or less hobbles my ability to contribute something unique to the pool of ideas and criticisms in the after-play discussion. Every impression I have is tainted by someone else’s first impression.
Don’t get me wrong – I love hearing about everyone’s impressions, but only after I have formed my own. A group of us watching a discussion play will each carve unique cognitive paths through the work without influencing one another. When we finally open up and present these paths and ideas to one another over food and drink, I believe we cover more ground.
I have no data to back this up. Only years of theatre-going experience.
A Code Review Anecdote
I recently received an email from a colleague of mine. She wanted me to go over some of her JavaScript to make sure it was up to snuff, since she was relatively new to the language. I noticed that she had also sent a copy of the email to another developer who has pretty sharp JavaScript chops.
When I finally had some free time, I went back to her email to write up the review. I felt bad – it was late, and the other reviewer hadn’t made a peep on the email thread, and she was hoping to use the code relatively soon. So I dove in, wrote my review, and sent it off.
A little while later, the other developer sent me his review, saying:
And here was my answer, which I didn’t send to you so as not to influence your reply. 😉
So the author of the code received two unique reviews, and neither of them had influenced the other. When I read his review, I noticed that we covered some similar ground, but a lot of unique ground as well. I suspect this wouldn’t have been the case had he sent his review to me first.
The Hypothesis
I hypothesize that author preparation in code review sabotages reviewers’ ability to objectively carve their own unique cognitive paths through the code. They see things from the author’s point of view, and this dulls their critical eye. Because of this, I believe fewer defects are detected.
I will take this hypothesis one step further.
I suspect any review, by the author or otherwise, will taint future reviews. If someone has already reviewed some code, I suspect this review will impact and possibly limit the ability of other reviewers to look at the code objectively. Like author preparation, I suspect this prevents reviewers from getting their own unique, valuable first impressions of the code. And I suspect that this causes some defects to go undetected.
Testing This Hypothesis
It’s a simple idea, really. Take a chunk of code and get some number of developers to review it. Take the same code, add some author preparation comments, and get a different set of developers to review it. Do all of the usual balancing, controls, and so on.
The question: does the number of detected defects drop? If so, this looks like evidence that author preparation sabotages review ability.
Take the experiment one step further. Take some code, have someone else review it, and then have participants review this code, having seen the first review. What happens to the number and type of defects that they find? What happens if they don’t see that initial review? Which setup yields higher defect detection?
Sounds doable. Sounds interesting. Sounds like something that would answer a few questions.
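To make the comparison concrete, here is a minimal sketch of how I might tabulate the results, assuming each review is recorded as the set of defect IDs the reviewer reported. All of the names and numbers below are made up.

```python
from statistics import mean

# Hypothetical results: the defect IDs found by each reviewer in each condition.
no_prep_reviews = [{"D1", "D3", "D7"}, {"D2", "D3"}, {"D1", "D5", "D6"}]
with_prep_reviews = [{"D1", "D3"}, {"D3"}, {"D1", "D3"}]

def summarize(reviews):
    """Average defects per reviewer, plus the union of all defects found."""
    per_reviewer = [len(r) for r in reviews]
    union = set().union(*reviews)
    return mean(per_reviewer), union

for label, reviews in [("no preparation", no_prep_reviews),
                       ("author preparation", with_prep_reviews)]:
    avg, union = summarize(reviews)
    print(f"{label}: {avg:.1f} defects per reviewer on average, "
          f"{len(union)} distinct defects overall ({sorted(union)})")
```

If preparation really does narrow everyone onto the author’s cognitive path, I’d expect the prepared condition to show more overlap between reviewers and a smaller union of distinct defects.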
Implications and Ideas
So what if one or both of my hypotheses are true? What does this mean for peer code review?
Well, if author preparation alone sabotages review ability, then the answer is simple: don’t let the authors prepare the review. The code goes up, and they stay silent.
But what if both are true?
An idea: how about I tweak MarkUs’s ReviewBoard so that reviewers cannot see what other reviewers have said until they’ve submitted a review of their own? What would happen to the defect detection numbers? Would reviewers react negatively to this? Would there be lots of repetition in the comments? Sounds like something worth looking into.
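For concreteness, the rule I have in mind looks something like this. It’s a toy, framework-agnostic sketch, not ReviewBoard’s actual API or data model.

```python
# Hide everyone else's comments from a reviewer until that reviewer has
# submitted a review of their own. (Hypothetical sketch only.)

def visible_comments(all_comments, viewer, submitted_reviewers):
    """all_comments: list of (reviewer, comment) pairs.
    submitted_reviewers: set of reviewers who have already submitted a review."""
    if viewer in submitted_reviewers:
        return all_comments  # already reviewed: show everything
    # otherwise: only show the viewer their own comments
    return [(r, c) for r, c in all_comments if r == viewer]

comments = [("alice", "Off-by-one in the loop?"), ("bob", "Rename this variable.")]
print(visible_comments(comments, viewer="carol", submitted_reviewers={"alice", "bob"}))
# -> []  (carol hasn't reviewed yet, so she sees nothing from alice or bob)
```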
I’d love to hear some thoughts on this. Anyone?
Very interesting stuff! A few thoughts:
1. There are really two different things going on with respect to author preparation: a.) what did the author write and b.) how did the reviewers react? When I am the author of a code review I tend to provide the back story and *not* information about the actual implementation. Example: I’m not much of a PHP wizard, yet I maintain a web site written by someone else in PHP. When I have to go learn a PHP idiom in order to enhance the site, in the resulting code review I’ll annotate the idiom with a comment that indicates “Based on what I’ve seen in the PHP docs and at site this is the correct approach – let me know if I’m way off track.”
Does that prejudice the reviewer? Perhaps, but only in the direction of: “This guy is new to this part of PHP, we need to tread carefully here.”
Even if I were to annotate with comments such as: “The method I’m calling here expects a hash map of this type,” the reaction of the reviewer is what matters. If the reviewer thinks: “Oh, okay, no problem” then that’s a very different end result than: “I’m not gonna’ trust this guy – I’m going to go read the method that is being called.”
2. Your next logical step, that reviewers prejudice each other, is also a possibility, but again to me it depends on the people involved. An additional factor, I suppose, is also the time pressure. I can see how it would be likely for a reviewer to say: “Looks like Joe covered this pretty thoroughly, I’m short on time so I’m gonna’ skim instead of really review.”
3. Tools can help. If some of the reviewers are slackin’ off, a tool that tracks the amount of time they spent in the review will help point that out.
As Jason pointed out: he didn’t set out to study the impact of author preparation, so a controlled experiment would be awesome. Can’t wait to read your results!
@Gregg:
Thanks for the input!
RE 1: True – I may have over-simplified in my post. You’re right: I’ll have to pay attention to what exactly the authors are saying when they’re preparing their code, and *how exactly* the reviewers are reacting to it.
RE 3: True, those tools can help. Inspection rate, defect density and defect rate are a decent start, but they only tell me so much. I want more. I want to know *where* my reviewers looked. I want fine granularity. I don’t just want to know how long they spent looking at a particular file – I want to know how long they spent looking at a particular method. I want to know how long they spent looking at an area of code that’s already been heavily reviewed. I want to know how long they spent looking at an area of code that nobody has said a word about. I want a heat map showing how much each area of the code has been looked at.
In short, I want a lot. 😉
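Something along these lines is what I’m picturing. It’s only a toy sketch with made-up event data; a real tool would have to log this kind of thing from the review UI.

```python
# Aggregate how long each reviewer spent looking at each region of code,
# then print a crude "heat map". The event format is entirely hypothetical.
from collections import defaultdict

# (reviewer, file, method, seconds spent looking at it)
events = [
    ("alice", "grader.py", "compute_total", 40),
    ("bob",   "grader.py", "compute_total", 55),
    ("alice", "grader.py", "parse_rubric",  5),
    ("bob",   "submit.py", "validate",      70),
]

heat = defaultdict(int)
for _reviewer, filename, method, seconds in events:
    heat[(filename, method)] += seconds

for (filename, method), seconds in sorted(heat.items(), key=lambda kv: -kv[1]):
    print(f"{filename}:{method:<15} {'#' * (seconds // 10)} {seconds}s")
# Regions with little or no "heat" are the ones nobody has really looked at yet.
```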
Awesome analysis, and I completely agree that a proper experiment on this point would be invaluable, and that the original data here wasn’t experimental at all.
In fact, I think the (self-)selection bias you mention might even be the entire story — more thoughtful developers will probably want to discuss their code.
However, there’s one thing I think you should add to your experiment. It’s not enough to measure whether there’s a “drop” in defects, because multiple reviewers are likely to simply find *different* defects. So I think you want to measure specifically WHICH defects each group found rather than the NUMBER of defects, and furthermore, if you do want to summarize, you should include things like defect severity.
This suggestion comes from experience — in the Cisco study we didn’t measure either of these things and in retrospect it really hampered our ability to make conclusions!
Let me know when you do this study!
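(For illustration, the kind of per-defect, severity-aware comparison described above might be tabulated like this. All defect IDs and severities below are made up.)

```python
# Compare WHICH defects each condition found, weighting by known severity.
severity = {"D1": "major", "D2": "minor", "D3": "major", "D4": "minor", "D5": "major"}

found_without_prep = {"D1", "D2", "D3", "D5"}
found_with_prep = {"D1", "D3"}

groups = [
    ("found in both conditions", found_without_prep & found_with_prep),
    ("only without preparation", found_without_prep - found_with_prep),
    ("only with preparation",    found_with_prep - found_without_prep),
]

for label, defects in groups:
    detail = ", ".join(f"{d} ({severity[d]})" for d in sorted(defects)) or "none"
    print(f"{label}: {detail}")
```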
Very interesting ideas and discussion here. I think there are two possible experiments you could do here to judge the impact on code quality:
1. As you mentioned, the impact of ‘reviewer influence’ upon one another. I’ve found that in any creative endeavor, the feedback you get from a given person will often change depending on whether they know what another person thinks. It probably has a lot to do with our innate sense of ‘role’ or ‘hierarchy’ within a social group: if a person of strong standing has expressed a certain opinion, then many people are going to be hesitant to express an opposing viewpoint.
2. WRT ‘Reviewee preparation’ another interesting experiment might be to have reviews performed at random vs. reviews performed based on certain criteria. If a developer knows that any given changeset might be subject to review, as opposed to only changesets which meet some deterministic criteria, will that affect the overall code quality? I would suspect that if developers knew that some randomly selected subset of their commits were going to be subjected to a thorough review (I can’t help but think of airport screening as an analogous example), they might be more careful in reviewing all of their commits.
I think there’s a thesis in there 🙂