by WANG Yan-qing, LI Yi-jun, Michael Collins, LIU Pei-jie
SIGCSE ’08, March 12-15, 2008
If you’ve been following, I’ve been trying to figure out why code reviews aren’t a part of the basic undergraduate computer science curriculum. The other papers and articles I’ve read so far have had less to do with the classroom, and more to do with code reviews in industry.
This paper got a little bit closer to the classroom, and, more importantly, closer to my particular question.
To begin, the paper introduces some terminology I’m not familiar with – the software crisis. I’m familiar with the concept though: writing good software for large systems is not a simple problem, and as computers become a bigger and more important part of our lives, this inability to easily write good code could quickly end up biting us in the collective rear.
Code review is one of several methods that the software industry has adopted to try to “tame” the software crisis.
I like this part:
Even though code reviews are time consuming, they are much more efficient than testing. A typical engineer, for example, will find approximately 2 to 4 defects in an hour of unit testing but will find 6 to 10 defects in each hour of code review.
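Doing the arithmetic on those figures: even the low end of review beats the high end of testing. A quick back-of-the-envelope check (the per-hour ranges are the paper's; comparing midpoints is my own simplification):

```python
# Defect-detection rates quoted in the paper (defects found per hour).
unit_testing = (2, 4)
code_review = (6, 10)

def midpoint(rate_range):
    """Midpoint of a (low, high) defects-per-hour range."""
    return sum(rate_range) / 2

# How much more productive review is than testing, at the midpoints.
ratio = midpoint(code_review) / midpoint(unit_testing)
print(f"Review finds ~{ratio:.1f}x as many defects per hour as unit testing")
```

By that rough measure, review is well over twice as productive per hour, which is what makes the "time consuming" objection worth attacking.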
What more argument do you need? It’s just a matter of getting rid of that “time consuming” part, right? Right…
And this is even juicier:
PCR [peer code review] is a technique which is generally considered to be effective on promoting students’ higher cognitive skills, since students use their own knowledge and skill to interpret, analyze and evaluate others’ work to clarify and correct it.
Wonderful! I’m in my problem space!
Reading along, it seems that this paper is introducing a new, refined structure for PCR, and will detail results of a study on using that new structure in a programming course. Cool.
The introduction ends by saying that the new structure seemed to enhance the quality of students’ work, as well as their ability to critique one another. Great news!
It’s not all sunshine and puppies, though – they also mention that they ran into a few problems, and that they’ll be discussing those too.
So the first thing they’ve done is try to make the terminology clearer:
- Author: the student who writes the code that is being reviewed
- Reviewer: the person who is reviewing the code
- Reviser: the author, after receiving a Comments Form from a Reviewer
- Instructor: the teacher or qualified TA who is responsible for the class
- Manuscript Code: the unrevised code that is first submitted by an Author
- Comments Form: the comments given from the Reviewer to the Author
- Revision Code: the code that is revised by the Reviser after the Reviewer gives the Reviser the Comments Form (whew…follow that?)
- Reference Solution: the “answer” to the assignment, held by the Instructor
Now that we’ve got all the players and documents laid out, let’s take a look at the process:
- Phase 1: The Author completes the Manuscript Code
- Phase 2: The Author emails the Manuscript Code to the Instructor. Simultaneously, a blank Comments Form and a copy of the Manuscript Code are sent to a Reviewer
- Phase 3: The Reviewer reviews the code as soon as possible, filling in the Comments Form.
- Phase 4: The Reviewer sends the completed Comments Form back to the Author, and also sends a carbon copy to the Instructor
- Phase 5: After receiving the Comments Form, the Reviser (who was originally the Author…oh boy…almost went cross-eyed, there) makes the appropriate alterations to the original Manuscript Code, referencing the Comments Form where appropriate. The completed Revision Code is emailed to the Instructor.
- Phase 6: The Instructor should now have a copy of the original Manuscript Code, the completed Comments Form, and the final Revision Code. The Instructor should be able to check that the Author and Reviewer did their work properly.
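The six phases boil down to a simple hand-off pipeline. Here’s a minimal sketch of the flow in Python (the class and field names are mine; the paper only describes the process in prose and a diagram):

```python
from dataclasses import dataclass, field

@dataclass
class Instructor:
    """Collects every artifact so the process can be audited (Phase 6)."""
    inbox: list = field(default_factory=list)

    def receive(self, label, document):
        self.inbox.append((label, document))

def peer_code_review(author_code, review_fn, revise_fn, instructor):
    """One pass of the paper's PCR process.

    author_code -- the Author's Manuscript Code (Phase 1)
    review_fn   -- the Reviewer: Manuscript Code -> Comments Form (Phase 3)
    revise_fn   -- the Reviser: (code, comments) -> Revision Code (Phase 5)
    """
    # Phase 2: the Manuscript Code goes to the Instructor (and the Reviewer).
    instructor.receive("manuscript", author_code)
    # Phases 3-4: the Reviewer fills in the Comments Form; the Instructor is cc'd.
    comments = review_fn(author_code)
    instructor.receive("comments", comments)
    # Phase 5: the Author, now acting as Reviser, applies the comments.
    revision = revise_fn(author_code, comments)
    instructor.receive("revision", revision)
    # Phase 6: the Instructor now holds all three artifacts.
    return revision
```

Seen this way, the whole thing is three sends and two transformations, which rather supports my feeling that the prose version is more convoluted than it needs to be.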
Wow. What a convoluted way of saying something simple. They even included a diagram, with lots of arrows. Somehow, I think this could be said more simply. Oh well.
It also sounds like a lot of emailing. You’re balancing your course on the reliability of the email protocol? Errr….
Well, let’s see what problems they ran into…
- The assumption that all participants would carefully and responsibly carry out each phase of the process was faulty. This may have been due to “careless authors, irresponsible reviewers and busy instructors in the review process”.
- Some students lack the coding ability to either:
- Produce code that is readable and reviewable in a constructive way
- Review code in a constructive, or informed way
- The process is difficult to control due to the reliance on email (no kidding!)
- Some students would not submit Manuscript Code or Comments Forms on time
- Some students would submit multiple copies of their Manuscript Code, due to an inherent mistrust of the reliability of email
- There was opportunity for students to “game” the process to their advantage. In this particular study, there was very little control of who was doing what. Though a particular Author was supposed to write the Manuscript Code, this wasn’t enforced, and there was an occasion where another student wrote the code instead. Same with review writing, and revision writing. Yeah, cheating is always a problem.
The paper then goes into some discussion about the observed behaviour of Authors and Reviewers. They noted that most students did not enjoy reviewing very poorly written code, and did not give their best effort on reviews of such code. Mere encouragement from the instructor was not enough to compel them to give their best reviews either. The paper suggests finding some way of making Reviewers review code more carefully; perhaps through awarding bonus marks.
Behaviour of Instructors was also analyzed. The paper mentioned that Instructors with large class sizes might try to cut down on their workload by only viewing the Comments Forms that the Reviewers had provided. But this strategy does not give the Instructor the entire story, and is open to manipulation by students.
The paper ends with a discussion about group formations, and how best to diffuse student cheating conspiracies.
At the last moment, they suggest that some “web-based [application] with a built-in blind review mechanism” be developed.
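For what it’s worth, the blind-assignment half of such a tool is straightforward. Here’s a guess at how it might pair Authors with Reviewers (entirely my own sketch, not anything from the paper): rotate a shuffled class list by one, so nobody reviews their own code, and show submissions under opaque IDs so neither side learns the other’s identity.

```python
import random

def assign_blind_reviews(students, rng=None):
    """Map each Author to a Reviewer other than themselves.

    Shuffling the class list and pairing each student with the next one
    in the shuffled cycle guarantees no self-review, and every student
    reviews exactly one submission.
    """
    rng = rng or random.Random()
    order = list(students)
    rng.shuffle(order)
    # Each student reviews the next student's code in the shuffled cycle.
    return {order[i]: order[(i + 1) % len(order)] for i in range(len(order))}
```

Combined with server-side storage instead of email, that would knock out the reliability complaints, the multiple-submission paranoia, and a good chunk of the gaming problem in one go.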