
The Wisdom of Peers: A Motive for Exploring Peer Code Review in the Classroom

A major part of my Master’s degree requirements was my research paper.  If you heard me lament over the past year or so about my “thesis”, I was referring to this research paper.

Anyhow, after lots of hard work, my research paper was finally signed off by my supervisor, Dr. Greg Wilson, and second reader Dr. Yuri Takhteyev.  A huge thanks to both of them!

Here’s the abstract, followed by a download link for the PDF.  Enjoy!

Abstract

Peer code review is commonly used in the software development industry to identify and fix problems during the development process. An additional benefit is that it seems to help spread knowledge and expertise around the team conducting the review. So is it possible to leverage peer code review as a learning tool? Our experiment results show that peer code review seems to cause a performance boost in students. They also show that the average total peer mark generated by students seems to be similar to the total mark that a graduate-level teaching assistant might give. We found that students agree that peer code review teaches them something – however, we also found they do not enjoy grading their peers’ work. We are encouraged by these results, and feel that they are a strong motive for further research in this area.

Click here to download my research paper

Research Experiment: A Recap

Before I start diving into results, I’m just going to recap my experiment so we’re all up to speed.

I’ll try to keep it short, sweet, and punchy – but remember, this is a couple of months of work right here.

Ready?  Here we go.

What I was looking for

A quick refresher on what code review is

Code review is like the software industry equivalent of a taste test.  A developer makes a change to a piece of software, puts that change up for review, and a few reviewers take a look at that change to make sure it’s up to snuff.  If some issues are found during the course of the review, the developer can go back and make revisions.  Once the reviewers give it the thumbs up, the change is put into the software.

That’s an oversimplified description of code review,  but it’ll do for now.

So what?

What’s important is to know that it works. Jason Cohen showed that code review reduces the number of defects that enter the final software product. That’s great!

But there are some other cool advantages to doing code review as well.

  1. It helps to train up new hires.  They can lurk during reviews to see how more experienced developers look at the code.  They get to see what’s happening in other parts of the software.  They get their code reviewed, which means direct, applicable feedback.  All good things.
  2. It helps to clean and homogenize the code.  Since the code will be seen by their peers, developers are generally compelled to not put up “embarrassing” code (or, if they do, to at least try to explain why they did).  Code review is a great way to compel developers to keep their code readable and consistent.
  3. It helps to spread knowledge and good practices around the team.  New hires aren’t the only ones to benefit from code reviews.  There’s always something you can learn from another developer, and code review is where that will happen.  And I believe this is true not just for those who receive the reviews, but also for those who perform the reviews.

That last one is important.  Code review sounds like an excellent teaching tool.

So why isn’t code review part of a standard undergraduate computer science education?  Greg and I hypothesized that code review isn’t taught because we don’t know how to teach it.

I’ll quote myself:

What if peer code review isn’t taught in undergraduate courses because we just don’t know how to teach it?  We don’t know how to fit it in to a curriculum that’s already packed to the brim.  We don’t know how to get students to take it seriously.  We don’t know if there’s pedagogical value, let alone how to show such value to the students.

The idea

Inspired by work by Joordens and Pare, Greg and I developed an approach to teaching code review that integrates itself nicely into the current curriculum.

Here’s the basic idea:

Suppose we have a computer programming class.  Also suppose that after each assignment, each student is randomly presented with anonymized assignment submissions from some of their peers.  Students will then be asked to anonymously peer grade these assignment submissions.

Now, before you go howling your head off about the inadequacy / incompetence of student markers, or the PeerScholar debacle, read this next paragraph, because there’s a twist.

The assignment submissions will still be marked by TAs as usual.  The grades that a student receives from their peers will not directly affect their mark.  Instead, each student is graded on how well they graded their peers:  the peer reviews that a student completes will be compared with the grades that the TAs delivered, and the closer a student’s grades are to the TA’s, the better the mark they get on their “peer grading” component (which is distinct from the mark they receive for their programming assignment).
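
To make that concrete, here’s a rough sketch of one way the “peer grading” component could be scored.  The scoring rule and the peer_grading_score helper below are just an illustration of the idea, not an actual formula from the paper.

    # Hypothetical scoring rule: reward a student's peer grades for being close
    # to the marks the TA gave the same submissions.
    def peer_grading_score(peer_marks, ta_marks, max_mark=10.0):
        """Return a score in [0, 1]; 1.0 means the student matched the TA exactly.

        peer_marks: the marks this student gave to the submissions they reviewed
        ta_marks:   the TA's marks for those same submissions
        """
        diffs = [abs(p - t) for p, t in zip(peer_marks, ta_marks)]
        mean_diff = sum(diffs) / len(diffs)
        # Perfect agreement -> 1.0; disagreeing by the whole mark range -> 0.0
        return max(0.0, 1.0 - mean_diff / max_mark)

    # Example: a student whose peer grades track the TA fairly closely
    print(peer_grading_score([7, 5, 9], [8, 5, 9]))  # roughly 0.97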

Now, granted, the idea still needs some fleshing out, but already, we’ve got some questions that need answering:

  1. Joordens and Pare showed that for short written assignments, you need about 5 peer reviews to predict the mark that the TA will give.  Is this also true for computer programming assignments?  (See the sketch after this list for one way this could be checked.)
  2. Grading students based on how much their peer grading matches TA grading assumes that the TA is an infallible point of reference.  How often do TAs disagree amongst themselves?
  3. Would peer grading like this actually make students better programmers?  Is there a significant difference in the quality of their programming after they perform the grading?
  4. What would students think of peer grading computer programming assignments?  How would they feel about it?
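
For question 1, here’s the rough sketch promised above of how the “how many peer reviews do you need?” question could be checked:  average k randomly-sampled peer marks for each submission and see how close that average lands to the TA’s mark as k grows.  Everything below – the marks, the mark scale, the helper function – is made up purely for illustration.

    import random

    def mean_error_vs_ta(peer_marks_by_submission, ta_marks, k, trials=200):
        """Average |mean of k sampled peer marks - TA mark| over random samples."""
        errors = []
        for _ in range(trials):
            for sub_id, peers in peer_marks_by_submission.items():
                sample = random.sample(peers, k)
                errors.append(abs(sum(sample) / k - ta_marks[sub_id]))
        return sum(errors) / len(errors)

    # Fake data: 30 submissions, each with 10 peer marks out of 10 and one TA mark.
    peer_marks = {s: [min(10, max(0, random.gauss(7, 1.5))) for _ in range(10)]
                  for s in range(30)}
    ta_marks = {s: 7.0 for s in range(30)}

    for k in (1, 3, 5, 7):
        print(k, "reviews ->", round(mean_error_vs_ta(peer_marks, ta_marks, k), 2))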

So those were my questions.

How I went about looking for the answers

Here’s the design of the experiment in a nutshell:

Writing phase

I have a treatment group and a control group.  Both groups are composed of undergraduate students.  After completing a short pre-experiment questionnaire, participants in both groups will have half an hour to work on a short programming assignment.  The treatment group will then have another half an hour to peer grade some submissions for the assignment they just wrote.  The submissions that they mark will be mocked up by me, and will be the same for each participant in the treatment group.  The control group will not perform any grading – instead, they will do an unrelated vocabulary exercise for the same amount of time.  Then, participants in both groups will have another half an hour to work on a second short programming assignment.  Participants in the treatment group will then complete a short post-experiment questionnaire about their peer grading experience.  Then the participants are released.

Here’s a picture to help you visualize what you just read.

Tasks for each group in my experiment.

So now I’ve got two piles of submissions – one for each assignment, with 30 submissions in each pile and 60 submissions in total.  I add my five mock-ups to each pile.  That means 35 submissions in each pile, and 70 submissions in total.

Marking phase

I assign ID numbers to each submission, shuffle them up, and hand them off to some graduate-level TAs that I hired.  The TAs will grade each assignment using the same marking rubric that the treatment group used to peer grade.  They will not know whether they are grading a treatment group submission, a control group submission, or a mock-up.

Choosing phase

After the grading is completed, I remove the mock-ups, and pair up the submissions from the two piles based on who wrote them.  So now I’ve got 30 pairs of submissions:  one for each student.  I then ask my graders to look at each pair, knowing that both submissions were written by the same student, to choose which one they think is better coded, and to rate and describe the difference (if any) between the two.  This is an attempt to catch possible improvements in the treatment group’s code that might not be captured in the marking rubric.

So that’s what I did

So everything you’ve just read is what I’ve just finished doing.

Once the submissions are marked, I’ll analyze the marks for the following:

  1. Comparing the two groups, is there any significant improvement in the marks from the first assignment to the second in the treatment group?
    1. If there was an improvement, on which criteria?  And how much of an improvement?
  2. How did the students do at grading my mock-ups?  How similar were their peer grades to what the TAs gave?
  3. How much did my two graders agree with one another?
  4. During the choosing phase, did my graders tend to choose the second assignment over the first assignment more often for the treatment group?

And I’ll also analyze the post-experiment questionnaire to get student feedback on their grading experience.
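
For the statistically inclined, here’s a rough sketch of what that analysis might look like.  The mark arrays are placeholders, and the particular tests (a two-sample t-test for question 1, a plain correlation for question 3) are just one reasonable choice – not necessarily what will end up in the final write-up.

    from scipy import stats

    # Placeholder data: improvement (assignment 2 mark minus assignment 1 mark)
    # for each participant in each group.
    treatment_improvement = [1.5, 0.0, 2.0, -0.5, 1.0]
    control_improvement = [0.5, -1.0, 0.0, 0.5, 0.0]

    # Question 1: is the treatment group's improvement significantly larger?
    t_stat, p_value = stats.ttest_ind(treatment_improvement, control_improvement)
    print("two-sample t-test:", round(float(t_stat), 3), round(float(p_value), 3))

    # Question 3: how well do my two graders agree?  One simple measure is the
    # correlation between their marks on the same set of submissions.
    grader_a = [8, 6, 9, 7, 5]
    grader_b = [7, 6, 9, 8, 5]
    r, p = stats.pearsonr(grader_a, grader_b)
    print("grader correlation:", round(float(r), 3), round(float(p), 3))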

Ok, so that’s where I’m at.  Stay tuned for results.

Challenged

It seems pretty in vogue lately to complain about schools, and the current state of our learning institutions.

I’ve certainly done my fair share of complaining.

Chances are, at some point, you have too.

Anyhow, my sister is a teacher, and I’ve become friends with a bunch of teachers, and I think they know it’s not perfect too.  I’m also pretty sure any system large enough will eventually draw complaints for one thing or another.  Nothing is perfect.

But for all my griping, complaining, and whining throughout school, I’m still glad I did it.  I still think it made me a better person than I would have been without it.

This comic by Stuart McMillen reminded me of that, and I thought I’d share it.

Enjoy.

"Challenged" by Stuart McMillen

"Challenged" by Stuart McMillen

Learn until you die.

Limits of the i>Clicker

This post comes from an idea that Karen Reid posed to me as potential research…

i>Clickers have been around for a few years.  I’ve never had to buy or use one in any of my classes, but it seems like more and more courses are starting to find them useful.

So what is this i>Clicker thing?

An i>Clicker is a handheld wireless device that essentially brings the “ask the audience” portion from Who Wants to Be A Millionaire, into the classroom.

So, each student buys his or her own personal i>Clicker, and registers it for any classes that require it.  During one of those classes, the instructor could throw up a slide that quizzes the students on what was just taught.  Students key in their responses on the i>Clicker, and the results are then displayed up on the screen.

From what I can tell, the idea is that the i>Clicker should encourage more class participation because:

  1. Students answer simultaneously – so instead of instructors choosing a raised hand from the class, the entire class gets polled
  2. Results displayed to the class can be anonymous.  So, instead of remaining silent among your peers out of fear of being publicly wrong, all students can submit an answer, and get feedback that helps them learn

The i>Clicker can also be used to poll students and give the instructor feedback. For example, an instructor could put up a slide that says “How was my lecture today?” and get some anonymous feedback there.

Well, not exactly anonymous.  See, the instructor has the ability to see who submitted what, and when…so if you repeatedly answer quiz questions incorrectly, the instructor can probably detect that you’re misunderstanding, guessing, or just don’t care.

Anyhow, that’s the basic idea behind the i>Clicker.  It’s used in a few classes here at UofT, and I know people who’ve had to purchase one ($35+) and use it.

Click here to visit the i>Clicker website

The Limits of the i>Clicker

The amount of data that students can provide through the i>Clicker is pretty limited.  Here’s a photo of the device:

The iClicker.

Ta da.

Students have a maximum of 5 choices that they can make while being polled.  Instructors are restricted to multiple-choice questions.

Hm.  Can’t we do any better?

Turning Smartphones into an i>Clicking Device

Wifi-enabled “smartphones” are becoming part of everyday life.  It seems like I can’t walk half a block without seeing somebody whip out their iPhone and do something really freakin’ cool with it.

So it’s not really a far-fetched idea to imagine that, some day, every student will possess one of these things.

Certainly, something like the iPhone could act as a multiple choice interface.  But is there a way of turning some of that cool touch/gesture/accelerometer stuff into useful polling feedback for students and instructors?

Some Ideas

  1. The instructor puts a graphic up on the board, and asks the students “what’s wrong with this picture?”.  Students look at the picture on their SmartPhone, and use their finger to indicate the portion of the picture that they’re interested in.  After a few seconds, the instructor displays the results – a semi-transparent overlay on the image, showing all of the areas that students indicated.  Areas that are of interest to more students are emphasized.  I can see this being useful for code reading classes:  the instructor splats a piece of code up on the screen, and asks the students to indicate where the bug is.
  2. Students are asked to mock up a paper prototype for an interface that they are designing.  The instructor asks all of the students to take a picture of their paper prototype, and submit it on their SmartPhone.  The instructor is then able to put all of the photos up on the screen for discussion.  This could nicely tie in with idea #1.
  3. Students are polled to see how many years they have been programming for.  Students simply type in the number of years, and submit it.  While the i>Clicker restricts the answer to such a question to 5 ranges, the SmartPhone submits the actual answer.  Once collected, the submissions could be displayed on a histogram to give students an accurate impression of the level of experience in the classroom.  (See the sketch after this list for roughly what that could look like.)
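
To make idea #3 a little more concrete, here’s the rough sketch promised above:  collect the actual numbers and bin them into a histogram, instead of squeezing the question into five preset ranges.  The answers below are made up.

    from collections import Counter

    # One free-form numeric answer per student:
    # "how many years have you been programming?"
    years_programming = [0, 1, 1, 2, 2, 2, 3, 4, 4, 5, 6, 8, 10]
    counts = Counter(years_programming)

    for years in range(max(years_programming) + 1):
        print(f"{years:2d} years | " + "#" * counts[years])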

Any other ideas?

Exploring Peer Review in the Computer Science Classroom: Part 2 (Exciting Conclusion)

Now, where was I?

Oh yeah, I was reviewing this paper, and getting right to the good part – the experiment method, data, and results.

I hate to disappoint you, but this section starts as a bit of a downer:

…Unfortunately, the data collected was spotty, to say the least, and was not linked together well enough to support a reasonable, detailed analysis to meet our goals.

Yikes.  Oh well, let’s see what happened…

One issue we discovered was that our design was too broad to give definitive results.  We did, however, have enough data to allow us to narrow down the focus for a second study.  From the analysis of one class, we were able to identify two interesting areas for further research.  The type of review appears [sic] have a significant effect on the length and focus of the review.  We also found evidence that students reviewed some of the concepts differently than they did others.  Both of these findings should be explored in the future work.

Good.  At least there’s some groundwork for somebody to do a future study.

Reading on, I’m impressed with the scope of their experiment.  For example, instead of just experimenting on a single classroom, the authors ran their experiment across 8 classrooms, each with anywhere from 10 to 60 participants.  Ambitious.  Nice.

While their experiment may have been too broad to get the results they were looking for, it certainly has more authority than the studies that just used a single, small class.

However, maybe I spoke too soon:

The data collected for this study did not occur as smoothly as we would have wished. … As a result, we were able to collect a large amount of data but it is not as complete as we would have liked.

Hm.  Doesn’t sound that great.

Apparently, data was supposed to be collected from surveys, review rubrics, and questionnaires, but it looks like the questionnaire data kind of fell through:

The number of responses to the questionnaires was low.  Three classes had no post-questionnaire responses at all.  Of those classes that did have responses for the second questionnaire, the number was too small to lend any confidence to a statistical analysis…

Review rubric data was a little more interesting – 996 completed rubrics were collected from 299 reviewers from the 8 classes over the course of the study.  Nice.  But, again, it wasn’t all flowers and hugs:

While the amount of data was large, it is also incomplete.  Of those classes which were intended to have both training and a peer reviews [sic], three of them were not able to complete the second review assignment and, so, have nothing to be compared to.  Two of the other classes have only a moderate number of participants (15-20) which is not as high as we would have liked for our statistical analysis.  One class provided no viable information at all…

Things just seem to get worse and worse for these guys.

And, not to kick them while they’re down, but their writing seems to get worse and worse too.  I’m noticing more typos and tense errors as I go along.  Maybe the stress was getting to them…

So, what did they find?  Drum roll, please…

Final Results

Surprise!  It’s inconclusive!

It sounds like they found more questions than answers…

Is it type or order (or both) that is causing the effects the [sic] training and peer review?

It took me a little while to figure out what they were asking here.  Apparently, before engaging in peer review exercises, students would critique material that was provided by the instructor.  It seems that the training review step, on average, generated more verbose comments (though not necessarily relevant comments) than the peer review step.  But the peer review step tended to produce more relevant comments.  The experimenters have a few theories on why that is:

  1. Social pressure could cause students to be less verbose in their critiques of one another.
  2. Increased learning after the training exercise could cause students to be more succinct and precise in their reviews.
  3. Students were more engaged in the training exercise, and felt more inclined to be verbose (even though they were less precise).

Unfortunately, the experimenters note, a lot rests on the motivation and attitude of the students, which wasn’t really considered or measured during the design of the experiment.  So, to break it down simply:  the type of review (training vs. peer) and the order (training first, then peer review) caused some numbers to change in their tables…but they don’t really know why.

They had questions on other things too…

Why are there differences in how concepts are reviewed?

  • Are there differences in conceptual difficulty?
  • Do the reviews improve student learning of these concepts?

The three CS topics that were focused on during these courses were the OOP concepts of Abstraction, Decomposition, and Encapsulation.  The experimenters also theorized that successful reviews go through the following steps:

  1. Analysis
  2. Evaluation
  3. Explanation
  4. Revision

(Unspecified) variations in how the students used these steps, and how verbose they were at each step, caught the experimenters’ attention.  They wonder if this has something to do with the conceptual difficulty of each topic, or if the reviews were affecting the students’ understanding over time.

More questions they brought up…

Is reviewing an engaging and interesting task in computer science?

Very good question.  The experimenters noted that they had no measure of students’ interest, feelings, and engagement in the reviewing process.  They note that it is important to look at these attitudes over time for improvements or problems.

Are there significant learning benefits to reviewing in the early computer science curriculum as compared to other, common homework/lab exercises?

I’ll let them explain this one:

While we have identified a number of potential benefits from reviewing, we have not shown that it is better than or as good as what we currently do.  We require some sort of baseline to compare our efforts to.  We need a control group in our experiments in order to judge effectiveness.

And then the paper pretty much ends.

Where To Go From Here

The authors do a good job of lining up some interesting questions towards the end.  I guess this is how you salvage an experiment that didn’t go as planned – find the deeper questions, and see if somebody else can do a better job.

Or maybe, if you give the authors enough time, they’ll try to do the better job themselves. I think I’ve found the next paper to review.