by Christopher Hundhausen, Anukrati Agrawal, Dana Fairbrother, and Michael Trevisan
Here’s another paper about using code review in a first-year programming course. Let’s give it a peek.
The idea of this paper is to introduce a type of “studio-based” learning environment for programmers. The term originates from architecture, where students bring their designs into class and the class as a unit critiques and evaluates them. In the studio-based learning model, the instructor guides the critiques. This way, not only does the critiqued student get valuable feedback, but the critics also learn what to critique. Remember when I talked about learning how to read code? Studio-based learning seems to be one solution to that problem.
Studio-based learning could also help foster community among the students. In a field where isolated cubicles are still quite normal (I’m sitting in one right now!), a sense of community could be a welcome relief.
The paper also notes the following:
…the peer review process can (a) prepare students to deal with criticism [1], teach students to provide constructive criticism to others [1], provide students with experience with coming to a consensus opinion in cases where opinions differ [1], and build teamwork skills [8]. All of these are “soft” or “people” skills that are likely to be important in their future careers.
I whole-heartedly agree.
The paper then goes on to give a quick survey of how peer review (not just for code, but peer review in general) is currently used in education:
- pair programming (which is what I experienced at UofT)
- “in-class conference” model
- peer feedback meetings (mostly for project documentation)
- web-based peer grading solutions (RRAS and Peer Grader)
- peer code review
Maybe I’m just getting tired of reading these papers, but I found this section particularly difficult to get through. I had to read it 3 or 4 times just to understand what was going on.
Finally, they get on with it and talk about their approach to peer code review – a student-oriented version of a formal code review. Yikes – if by formal code review they mean a Fagan Inspection, then these poor students are in for some long meetings…
Here is the five-step outline of their approach:
- Plan the inspection of a specific piece of code.
- Hold a kick-off meeting with an inspection team to distribute the code to be inspected, train the team on the process, and set inspection goals.
- Have members of the inspection team inspect the code for defects on their own time.
- Hold an inspection meeting to log issues found by individual members’ inspections, and to find additional issues.
- Edit the code to address the issues uncovered in the inspection, and verify that the issues have been resolved.
Yep, that sounds a lot like a Fagan Inspection. As I’ve already found out, there are lighter, faster, and just-as-effective approaches to peer code review…
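To make the workflow a bit more concrete, here’s a minimal sketch of those five steps as a tiny Python data structure. To be clear, this is my own illustration – the class names, methods, and file name are all made up, and the paper describes a human process, not a tool.

```python
# A hypothetical sketch of the five-step inspection workflow. Nothing here
# comes from the paper itself; it just maps each step onto a small object.
from dataclasses import dataclass, field

@dataclass
class Defect:
    reporter: str      # which team member found it (step 3)
    description: str
    resolved: bool = False

@dataclass
class Inspection:
    code_under_review: str                            # step 1: plan what to inspect
    team: list[str] = field(default_factory=list)     # step 2: kick-off, assemble the team
    defects: list[Defect] = field(default_factory=list)

    def log_defect(self, reporter: str, description: str) -> None:
        """Step 4: log issues found by individual members at the inspection meeting."""
        self.defects.append(Defect(reporter, description))

    def resolve(self, index: int) -> None:
        """Step 5: mark an issue as fixed and verified."""
        self.defects[index].resolved = True

    def open_defects(self) -> list[Defect]:
        return [d for d in self.defects if not d.resolved]

# Hypothetical example: a two-student team inspecting a sorting assignment.
review = Inspection("sorting_assignment.py", team=["alice", "bob"])
review.log_defect("alice", "off-by-one error in the merge step")
review.log_defect("bob", "variable names don't describe their purpose")
review.resolve(0)
print(len(review.open_defects()))  # -> 1 issue still outstanding
```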
After using this approach in their course, the researchers observed:
- A positive trend in the quality of the students’ code, and a negative trend in the number of defects found per review
- Discussion transitioned from syntax and style to higher-level concepts, such as the architecture and design of the system. Thus, meaningful discussion was generated using their technique. The study notes, however, that “…we cannot provide any objective evidence that students were able to provide helpful critiques of each other’s code within the code reviews themselves.”
- Anecdotal evidence suggests that this work helped to create more of a sense of community among the students in the class
Sounds like a lot of anecdotal evidence.
I don’t know; reading the results of this paper, I was disappointed. The evidence and data they collected feel light and insubstantial. I certainly support what the study was trying to do, and maybe I’ve read too many peer review papers, but I no longer find it surprising to hear that peer code review improves students’ code. It’s good to have evidence for that, but I was hoping for something more.
The paper closes with the writers almost agreeing with me:
There are several limitations to our results that suggest the need for more rigorous follow-up studies with both larger student samples and additional data collection methods.
I knew it wasn’t just me!
Anyhow, the paper finishes with the writers outlining how they’d do future studies.
And that’s that.