Now, where was I?
Oh yeah, I was reviewing this paper, and getting right to the good part – the experimental method, data, and results.
I hate to disappoint you, but this section starts as a bit of a downer:
…Unfortunately, the data collected was spotty, to say the least, and was not linked together well enough to support a reasonable, detailed analysis to meet our goals.
Yikes. Oh well, let’s see what happened…
One issue we discovered was that our design was too broad to give definitive results. We did, however, have enough data to allow us to narrow down the focus for a second study. From the analysis of one class, we were able to identify two interesting areas for further research. The type of review appears [sic] have a significant effect on the length and focus of the review. We also found evidence that students reviewed some of the concepts differently than they did others. Both of these findings should be explored in the future work.
Good. At least there’s some groundwork for somebody to do a future study.
Reading on, I’m impressed with the scope of their experiment. Instead of studying just a single classroom, they ran the experiment across 8 classrooms, each with anywhere from 10 to 60 participants. Ambitious. Nice.
While their experiment may have been too broad to get the results they were looking for, it certainly has more authority than the studies that just used a single, small class.
However, maybe I spoke too soon:
The data collected for this study did not occur as smoothly as we would have wished. … As a result, we were able to collect a large amount of data but it is not as complete as we would have liked.
Hm. Doesn’t sound that great.
Apparently, data was supposed to be collected from surveys, review rubrics, and questionnaires, but it looks like the questionnaire data kind of fell through:
The number of responses to the questionnaires was low. Three classes had no post-questionnaire responses at all. Of those classes that did have responses for the second questionnaire, the number was too small to lend any confidence to a statistical analysis…
Review rubric data was a little more interesting – 996 completed rubrics were collected from 299 reviewers from the 8 classes over the course of the study. Nice. But, again, it wasn’t all flowers and hugs:
While the amount of data was large, it is also incomplete. Of those classes which were intended to have both training and a peer reviews [sic], three of them were not able to complete the second review assignment and, so, have nothing to be compared to. Two of the other classes have only a moderate number of participants (15-20) which is not as high as we would have liked for our statistical analysis. One class provided no viable information at all…
Things just seem to get worse and worse for these guys.
And, not to kick them while they’re down, but their writing seems to get worse and worse too. I’m noticing more typos and tense errors as I go along. Maybe the stress was getting to them…
So, what did they find? Drum roll, please…
Final Results
Surprise! It’s inconclusive!
It sounds like they found more questions than answers…
Is it type or order (or both) that is causing the effects the [sic] training and peer review?
It took me a little while to figure out what they were asking here. Apparently, before engaging in peer review exercises, students would critique material provided by the instructor. It seems that this training review step, on average, generated more verbose comments (though not necessarily more relevant ones) than the peer review step, while the peer review step tended to produce more relevant comments. The experimenters have a few theories on why that is:
- Social pressure could cause students to be less verbose in their critiques of one another.
- Increased learning after the training exercise could cause the students to be more succinct and precise in their reviews.
- Students were more engaged in the training exercise, and so felt inclined to be more verbose (even though they were less precise).
Unfortunately, the experimenters note, a lot rests on the motivation and attitude of the students, which wasn’t really considered or measured when the experiment was designed. So, to break it down simply: the type of review (training vs. peer) and the order (training first, then peer review) caused some numbers to change in their tables… but they don’t really know why.
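To make that comparison concrete, here’s a minimal sketch – mine, not the authors’ – of the kind of analysis they seem to be describing: checking whether training-review comments are longer, on average, than peer-review comments. The word counts below are made-up placeholders, and the paper doesn’t say which statistical test they used; a two-sample t-test is just one plausible choice.

```python
# Hypothetical sketch (not from the paper): compare average comment length
# between training reviews and peer reviews.
from scipy import stats

# Placeholder word counts per comment -- illustrative only, not real data.
training_lengths = [42, 55, 61, 38, 47, 52, 49, 66, 58, 44]
peer_lengths = [31, 29, 35, 40, 27, 33, 38, 30, 36, 28]

# Welch's t-test: does mean comment length differ between the two review types?
t_stat, p_value = stats.ttest_ind(training_lengths, peer_lengths, equal_var=False)

print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Comment length differs significantly between review types.")
else:
    print("No significant difference detected.")
```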
They had questions on other things too…
Why are there differences in how concepts are reviewed?
- Are there differences in conceptual difficulty?
- Do the reviews improve student learning of these concepts?
The three CS topics focused on in these courses were the OOP concepts of Abstraction, Decomposition, and Encapsulation. The experimenters also theorized that successful reviews go through the following steps:
- Analysis
- Evaluation
- Explanation
- Revision
(Unspecified) variations in how the students used these steps, and how verbose they were at each step, caught the experimenters’ attention. They wonder if this has something to do with the conceptual difficulty of each topic, or if the reviews were affecting the students’ understanding over time.
More questions they brought up…
Is reviewing an engaging and interesting task in computer science?
Very good question. The experimenters noted that they had no measure of students’ interest, feelings, and engagement in the reviewing process, and that it is important to look at these attitudes over time for improvements or problems.
Are there significant learning benefits to reviewing in the early computer science curriculum as compared to other, common homework/lab exercises?
I’ll let them explain this one:
While we have identified a number of potential benefits from reviewing, we have not shown that it is better than or as good as what we currently do. We require some sort of baseline to compare our efforts to. We need a control group in our experiments in order to judge effectiveness.
And then the paper pretty much ends.
Where To Go From Here
The authors do a good job of lining up some interesting questions towards the end. I guess this is how you salvage an experiment that didn’t go as planned – find the deeper questions, and see if somebody else can do a better job.
Or maybe, if you give the authors enough time, they’ll try to do the better job themselves. I think I’ve found the next paper to review.
If nothing else, it’s giving you a feel for how hard it is to do a good study…
Hey Mike,
We did the largest study of peer review ever (2500 reviews over 10 months) at Cisco Systems.
Read all about it here: http://smartbear.com/docs/book/code-review-cisco-case-study.pdf
So true (like Greg says) it’s hard to do studies properly.
@Jason:
I know! It’s an impressive study! I did a little review of what I found about it on your website:
http://mikeconley.ca/blog/2009/09/14/smart-bear-cisco-and-the-largest-study-on-code-review-ever/
And my book showed up a few days ago. Thanks for the free copy!
Yes, it seems that good, solid studies come once in a blue moon. Not to imply that the scientists aren’t trying hard enough – it just shows that certain areas of study need more…study.
At least for most of the papers I’ve read, I’ve learned something – a question, a technique, a name to follow up on, etc.
So I would say that studies like these are not failures, just steps into unexpected, unexplored directions.
Great to hear from you,
-Mike