One of the problems raised with my original idea for teaching code review was that it punishes students twice if they don’t understand a programming concept.
For example, if a student does not understand what pipes are for and how they work, they’re probably going to do pretty poorly on their pipes assignment in a systems intro course. So there’s one slam for the student.
The second time is when they review their peer’s code. If they still don’t understand how pipes work, their reviews are going to be pretty trashy. And they’ll get a poor mark for that. And that’s the second slam.
The problem here is that the students don’t get any feedback before they go into the peer review process. For the “weaker” students, this essentially means bringing a knife to a tank fight.
So here’s an idea:
- After an assignment due date passes, and the students have submitted their code, the students are randomly placed into groups of 3 or 4.
- Each group is assigned a single submission, chosen at random from the ones collected from the students.
- Each student in the group individually, and privately, performs a review of the assigned submission. They fill out a rubric, make comments, etc. They are not allowed to interact with the other members of their group.
- After the students have finished their review, they can converse with their other group members. The group must produce another review – but this one is by consensus. They must work together to find the most appropriate mark.
- Finally, after the consensus reviews are in, the groups are disbanded. Students are then shown their own code submissions. They must do a final review on their own code by filling in the marking rubric.
- Students’ marks will be based on:
    - The mark that the TA gave them
    - How closely their individual review of the group submission agrees with the TA’s assessment
    - How closely the group’s consensus review agrees with the TA’s assessment
    - How closely the review of their own code agrees with the TA’s assessment
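To make the marking concrete, here is one possible way to combine those four components into a single mark. The weights and the agreement measure (100 minus the absolute gap between a student’s score and the TA’s score for the same code) are entirely my own assumptions, not part of the proposal:

```python
def final_mark(ta_mark, individual_review, consensus_review, self_review,
               ta_group_assessment, ta_self_assessment,
               weights=(0.7, 0.1, 0.1, 0.1)):
    """Combine the four marking components (all on a 0-100 scale).

    Agreement is scored as 100 minus the absolute gap between a review
    and the TA's assessment of the same code, floored at 0. Both the
    weights and this agreement measure are hypothetical placeholders.
    """
    def agreement(review, ta_assessment):
        return max(0.0, 100.0 - abs(review - ta_assessment))

    components = (
        ta_mark,                                            # TA's mark on their own code
        agreement(individual_review, ta_group_assessment),  # individual review vs TA
        agreement(consensus_review, ta_group_assessment),   # consensus review vs TA
        agreement(self_review, ta_self_assessment),         # self-review vs TA
    )
    return sum(w * c for w, c in zip(weights, components))
```

With these particular weights, the coding mark still dominates; the review components act as a modest correction, which may or may not match the incentives the scheme is after.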
From my viewpoint, this model has several obvious strengths and weaknesses.
One major strength is that, even if students do poorly on the coding portion of their assignment, they might still have an opportunity to make it up by learning from their peers during the group consensus review. They’ll also have an opportunity to demonstrate their newfound understanding by reviewing their own code and admitting its shortcomings.
A major weakness of the idea is the sheer organizational complexity. Did you see how many steps there are? That’s a lot of work.
Plus, the model makes some pretty wild assumptions. A few off the top of my head:
- It assumes students can actually learn by performing peer review together, independent of which piece of code they’re reviewing.
- It assumes students will actually reach a final consensus during the group review. What about bullies? What about timid folks?
There are probably more that I’m not seeing yet.
Anyhow, this was an idea I had a few days ago, and I just wanted to write it down.