Treasure Hunting, and Research Idea #4

Remember those research questions Greg had me and Zuzel come up with?  He’s asked us to select one, and find some tools that could help us in an experiment to answer that question.

Originally, my favourite question was my second one:  in the courses where students work in teams, why aren’t the instructors providing or encouraging code review tools?

I’ve already received two answers to this question from instructors here at UofT.

Greg has warned me that this line of inquiry might be a bit too shallow.  It’s possible that, when asked about code review, instructors will just shrug their shoulders and say that they don’t have time to teach it on top of everything else.  Karen Reid’s response echoes this warning, somewhat.

And maybe Steve Easterbrook has a point – that code review is hard to assess as an assignment.  It seems he’s at least tried it. However, it appears that he was using fragments of Fagan Inspection reports as his measuring stick rather than the reviews themselves. I assert that this is where light-weight code review tools could be of some service: to actually see the conversation going on about the code.  To see the review.  I also assert that such a conversation is hard to fake, or at least, to fake well.

So, just go with me on this for a second:  let’s pretend that Steve is going to teach his course again.  Except this time, instead of collecting fragments of Fagan Inspection reports, he uses something like ReviewBoard, Crucible, or Code Collaborator to collect reviews and conversations.  Would it be worth his time?  Would it benefit the students?  Would they see the value of code review?

From reading this blog post, it seems that the Basie folks first got on the ReviewBoard bandwagon because Blake Winton set a good example as a code reviewer. I remember talking to Basie developer Bill Konrad this summer, and he told me that he’d never go back after seeing the improvement in code quality.

Because that’s the clincher – getting the students to see the value.  You can’t make a horse drink, and you can’t get students to use a tool unless they find it useful.  And how do you show that to them?  How do you show them the value without having to call in Blake Winton?  How do you make them see?  And how do you make the process painless enough so that instructors don’t have to worry about teaching a new, confusing tool?

One of the comments on Greg’s post says the following:

My feeling is that the easier it is to review code, the more interest you’ll see from students.

Maybe that’s really all you need.

So, how about this for an experiment – take a class of students who are working on group assignments, and set them up with a copy of a light-weight code review tool.  Get one of the first assignments in the class to involve a review so that they need to use the software at least once.  Now track the usage for the rest of the semester…do they still use it?  Do some use it, and not others?  If so, do the ones who use it tend to receive higher grades?  Or is there no difference?  What is the quality of their reviews when the reviews are not being marked?  What will the students think of the review tool once the course is completed?
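Just to make the tracking part concrete for myself, here is a rough sketch of what the end-of-term analysis could look like if the class ended up on ReviewBoard.  It assumes ReviewBoard’s web API lists review requests at /api/review-requests/ (with the submitter in each request’s links), and everything else is made up for illustration: the server URL, the grades.csv export and its column names.  A real study would obviously need a more careful design than splitting the class into two buckets, but the point is how little machinery the tracking itself requires.

```python
"""
Rough sketch only: pull per-student review activity out of a ReviewBoard
instance and compare grades between students who used the tool and those
who didn't.  Server URL, grades.csv, and its columns are hypothetical.
"""
import csv
from collections import Counter

import requests
from scipy.stats import mannwhitneyu

RB_URL = "https://reviews.example.edu"   # hypothetical course ReviewBoard server
GRADES_CSV = "grades.csv"                # hypothetical gradebook export: username,grade
PAGE_SIZE = 200


def reviews_per_student(rb_url):
    """Count review requests filed by each student over the term.

    Assumes the Web API's /api/review-requests/ resource and that the
    instance is readable without authentication (add credentials if not).
    """
    counts = Counter()
    start = 0
    while True:
        resp = requests.get(
            f"{rb_url}/api/review-requests/",
            params={"start": start, "max-results": PAGE_SIZE},
        )
        resp.raise_for_status()
        payload = resp.json()
        for req in payload["review_requests"]:
            counts[req["links"]["submitter"]["title"]] += 1
        start += PAGE_SIZE
        if start >= payload["total_results"]:
            break
    return counts


def compare_grades(counts, grades_path):
    """Split the class into tool users and non-users and compare their grades."""
    users, non_users = [], []
    with open(grades_path, newline="") as fh:
        for row in csv.DictReader(fh):          # expects columns: username, grade
            grade = float(row["grade"])
            (users if counts.get(row["username"], 0) > 0 else non_users).append(grade)
    stat, p = mannwhitneyu(users, non_users, alternative="two-sided")
    print(f"{len(users)} users vs {len(non_users)} non-users: U={stat:.1f}, p={p:.3f}")


if __name__ == "__main__":
    compare_grades(reviews_per_student(RB_URL), GRADES_CSV)
```

I’d reach for a Mann-Whitney test rather than a t-test since course grades are rarely anywhere near normally distributed, and the whole thing hinges on being able to map ReviewBoard usernames onto gradebook usernames, which is worth sorting out before the term starts.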

I think it’s simple enough, and I can’t believe I didn’t think about it earlier.

Some of the software I could use:

Quite a few choices for the review tool.  And I wasn’t even digging that deeply.  Perhaps I should do a quick survey across all of them to see which one would be the best fit for a CS course.  Perhaps.

1 thought on “Treasure Hunting, and Research Idea #4”

  1. David Wolever

    I know I’m a little late in the game here… But I entirely agree with what Greg said – the lower the barrier to entry, the higher the chance that students will actually use code review tools.

    Also, you ARE missing one tool from that list… One that is simpler, and has a lower barrier to entry, than any of the ones there 😉
